Deploying a Kubernetes v1.10 Cluster from Binaries

1. Architecture

1.1 Kubernetes architecture overview


1.2 Flannel network architecture diagram

1.3 Kubernetes workflow


2. Component Overview

2.1 Master node

2.1.1 API Server (gateway service): exposes the Kubernetes API and mainly handles REST operations and updates to objects stored in etcd. It is the single entry point for creating, reading, updating, and deleting every resource; only the API Server talks to etcd directly; all other components query or modify data through the API Server, which serves as the hub for data exchange and communication between them.
2.1.2 Scheduler: handles resource scheduling and assigns Pods to Nodes in the cluster. It watches kube-apiserver for Pods that have not yet been bound to a Node and assigns them to nodes according to the scheduling policy.
2.1.3 Controller Manager: carries out all remaining cluster-level functions and acts as the automation control center for resource objects. It monitors the state of the whole cluster through the apiserver and keeps the cluster in the desired state.
2.1.4 etcd: all persisted cluster state is stored in etcd.

2.2 Node

2.2.1 Kubelet: manages Pods along with their containers, images, volumes, etc., implementing management of the cluster's nodes.
2.2.2 Kube-proxy: provides network proxying and load balancing so that traffic can reach Services.
2.2.3 Docker: responsible for running and managing containers on the node.

3. Environment

3.1 Deployment nodes

Hostname      IP            Role     Software deployed
linux-node1   172.16.1.31   master   apiserver, scheduler, controller-manager, etcd, flanneld
linux-node2   172.16.1.32   node     kubelet, kube-proxy, etcd, flanneld
linux-node3   172.16.1.33   node     kubelet, kube-proxy, etcd, flanneld
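The machines are referred to by hostname and IP interchangeably throughout this guide. A minimal sketch of the /etc/hosts entries to append on every node, assuming these hostnames are not resolvable through DNS:

  cat >> /etc/hosts <<EOF
  172.16.1.31 linux-node1
  172.16.1.32 linux-node2
  172.16.1.33 linux-node3
  EOF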

3.2 Software package versions

Package                                Download URL
kubernetes-node-linux-amd64.tar.gz     https://dl.k8s.io/v1.10.1/kubernetes-node-linux-amd64.tar.gz
kubernetes-server-linux-amd64.tar.gz   https://dl.k8s.io/v1.10.1/kubernetes-server-linux-amd64.tar.gz
kubernetes-client-linux-amd64.tar.gz   https://dl.k8s.io/v1.10.1/kubernetes-client-linux-amd64.tar.gz
kubernetes.tar.gz                      https://dl.k8s.io/v1.10.1/kubernetes.tar.gz
flannel-v0.10.0-linux-amd64.tar.gz     https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
cni-plugins-amd64-v0.7.1.tgz           https://github.com/containernetworking/plugins/releases/download/v0.7.1/cni-plugins-amd64-v0.7.1.tgz
etcd-v3.2.18-linux-amd64.tar.gz        https://github.com/coreos/etcd/releases/download/v3.2.18/etcd-v3.2.18-linux-amd64.tar.gz

4. Kubernetes Installation

4.1 Environment initialization

4.1.1 Disable the firewall and SELinux, and turn off swap
  systemctl stop firewalld && systemctl disable firewalld
  setenforce 0
  vi /etc/selinux/config
  SELINUX=disabled
  swapoff -a && sysctl -w vm.swappiness=0
  sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

4.1.2 Add a domestic (Aliyun) Docker repository and install Docker
  cd /etc/yum.repos.d/
  wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
  yum clean all && yum repolist -y
  yum install -y docker-ce
  systemctl start docker

4.1.3 Prepare the deployment directories
  mkdir -p /opt/kubernetes/{cfg,bin,ssl,log}
  # scp -r /opt/kubernetes 172.16.1.32:/opt/
  # scp -r /opt/kubernetes 172.16.1.33:/opt/

4.1.4 Add the binary directory to the PATH environment variable
  vim ~/.bash_profile
  # .bash_profile
  # Get the aliases and functions
  if [ -f ~/.bashrc ]; then
          . ~/.bashrc
  fi
  # User specific environment and startup programs
  PATH=$PATH:$HOME/bin:/opt/kubernetes/bin/
  export PATH
  source ~/.bash_profile
  # scp ~/.bash_profile 172.16.1.32:~/
  # scp ~/.bash_profile 172.16.1.33:~/

4.1.5 Configure kernel parameters (the servers need to be rebooted afterwards)
  cat /etc/sysctl.conf
  net.ipv6.conf.all.disable_ipv6 = 1
  net.ipv6.conf.default.disable_ipv6 = 1
  net.ipv6.conf.lo.disable_ipv6 = 1
  vm.swappiness = 0
  net.ipv4.neigh.default.gc_stale_time=120
  net.ipv4.ip_forward = 1
  # see details in https://help.aliyun.com/knowledge_detail/39428.html
  net.ipv4.conf.all.rp_filter=0
  net.ipv4.conf.default.rp_filter=0
  net.ipv4.conf.default.arp_announce = 2
  net.ipv4.conf.lo.arp_announce=2
  net.ipv4.conf.all.arp_announce=2
  # see details in https://help.aliyun.com/knowledge_detail/41334.html
  net.ipv4.tcp_max_tw_buckets = 5000
  net.ipv4.tcp_syncookies = 1
  net.ipv4.tcp_max_syn_backlog = 1024
  net.ipv4.tcp_synack_retries = 2
  kernel.sysrq = 1
  # transparent bridge support for iptables
  net.bridge.bridge-nf-call-ip6tables = 1
  net.bridge.bridge-nf-call-iptables = 1
  net.bridge.bridge-nf-call-arptables = 1
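The net.bridge.* settings above only take effect once the br_netfilter module is loaded, and the remaining parameters can be applied without waiting for the reboot. A minimal sketch, assuming a stock CentOS 7 kernel:

  modprobe br_netfilter                                         # load the bridge netfilter module now
  echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf   # load it automatically at boot
  sysctl -p                                                     # apply /etc/sysctl.conf immediately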

4.2 Install the CA certificate tooling (the Kubernetes components use TLS certificates to encrypt their communication)

4.2.1 Install CFSSL
  [root@linux-node1 ~]# cd /usr/local/src
  [root@linux-node1 src]# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
  [root@linux-node1 src]# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
  [root@linux-node1 src]# wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
  [root@linux-node1 src]# chmod +x cfssl*
  [root@linux-node1 src]# mv cfssl-certinfo_linux-amd64 /opt/kubernetes/bin/cfssl-certinfo
  [root@linux-node1 src]# mv cfssljson_linux-amd64 /opt/kubernetes/bin/cfssljson
  [root@linux-node1 src]# mv cfssl_linux-amd64 /opt/kubernetes/bin/cfssl
  # Copy the cfssl binaries to the other nodes; with more nodes, copy them to every node.
  # scp /opt/kubernetes/bin/cfssl* 172.16.1.32:/opt/kubernetes/bin/
  # scp /opt/kubernetes/bin/cfssl* 172.16.1.33:/opt/kubernetes/bin/

4.2.2 Generate the template files
  [root@linux-node1 ~]# cd /usr/local/src
  [root@linux-node1 src]# mkdir ssl && cd ssl
  [root@linux-node1 ssl]# cfssl print-defaults config > config.json   # default certificate signing policy template
  [root@linux-node1 ssl]# cfssl print-defaults csr > csr.json         # default CSR template

4.2.3 Create the JSON configuration used to generate the CA
  [root@linux-node1 ~]# vim /usr/local/src/ssl/ca-config.json
  {
    "signing": {
      "default": {
        "expiry": "8760h"
      },
      "profiles": {
        "kubernetes": {
          "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
          ],
          "expiry": "8760h"
        }
      }
    }
  }

4.2.4 Create the JSON configuration for the CA certificate signing request (CSR)
  [root@linux-node1 ~]# vim /usr/local/src/ssl/ca-csr.json
  {
    "CN": "kubernetes",
    "key": {
      "algo": "rsa",
      "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "BeiJing",
        "L": "BeiJing",
        "O": "k8s",
        "OU": "System"
      }
    ]
  }

4.2.5 Generate the CA certificate (ca.pem) and key (ca-key.pem)
  [root@linux-node1 ~]# cd /usr/local/src/ssl
  [root@linux-node1 ssl]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca   # initialize the CA; produces ca-key.pem (private key) and ca.pem (certificate)
  [root@linux-node1 ssl]# ls -l ca*
  -rw-r--r-- 1 root root  290 Mar 4 13:45 ca-config.json
  -rw-r--r-- 1 root root 1001 Mar 4 14:09 ca.csr
  -rw-r--r-- 1 root root  208 Mar 4 13:51 ca-csr.json
  -rw------- 1 root root 1679 Mar 4 14:09 ca-key.pem
  -rw-r--r-- 1 root root 1359 Mar 4 14:09 ca.pem

4.2.6 Distribute the certificates
  [root@linux-node1 ssl]# cp ca.csr ca.pem ca-key.pem ca-config.json /opt/kubernetes/ssl
  # scp the certificates to the node machines
  # scp ca.csr ca.pem ca-key.pem ca-config.json 172.16.1.32:/opt/kubernetes/ssl
  # scp ca.csr ca.pem ca-key.pem ca-config.json 172.16.1.33:/opt/kubernetes/ssl
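Before signing anything else with this CA, its subject and validity period can be checked; a quick look using the tools installed above (either command works):

  [root@linux-node1 ssl]# cfssl-certinfo -cert ca.pem            # prints subject, issuer and not_after
  [root@linux-node1 ssl]# openssl x509 -in ca.pem -noout -dates  # alternative check with openssl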

4.3 Deploy the etcd cluster

4.3.1 Prepare the etcd package
  [root@linux-node1 ~]# cd /usr/local/src && wget https://github.com/coreos/etcd/releases/download/v3.2.18/etcd-v3.2.18-linux-amd64.tar.gz
  [root@linux-node1 src]# tar zxf etcd-v3.2.18-linux-amd64.tar.gz
  [root@linux-node1 src]# cd etcd-v3.2.18-linux-amd64
  [root@linux-node1 etcd-v3.2.18-linux-amd64]# cp etcd etcdctl /opt/kubernetes/bin/
  # scp etcd etcdctl 172.16.1.32:/opt/kubernetes/bin/
  # scp etcd etcdctl 172.16.1.33:/opt/kubernetes/bin/

4.3.2 Create the etcd certificate signing request
  [root@linux-node1 src]# cd /usr/local/src
  [root@linux-node1 src]# vim /usr/local/src/etcd-csr.json
  {
    "CN": "etcd",
    "hosts": [
      "127.0.0.1",
      "172.16.1.31",
      "172.16.1.32",
      "172.16.1.33"
    ],
    "key": {
      "algo": "rsa",
      "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "BeiJing",
        "L": "BeiJing",
        "O": "k8s",
        "OU": "System"
      }
    ]
  }

4.3.3 Generate the etcd certificate and private key
  [root@linux-node1 ~]# cd /usr/local/src
  [root@linux-node1 src]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem -ca-key=/opt/kubernetes/ssl/ca-key.pem -config=/opt/kubernetes/ssl/ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
  # The following certificate files are generated
  [root@k8s-master src]# ls -l etcd*
  -rw-r--r-- 1 root root 1045 Mar 5 11:27 etcd.csr
  -rw-r--r-- 1 root root  257 Mar 5 11:25 etcd-csr.json
  -rw------- 1 root root 1679 Mar 5 11:27 etcd-key.pem
  -rw-r--r-- 1 root root 1419 Mar 5 11:27 etcd.pem

4.3.4 Move the certificates into /opt/kubernetes/ssl
  [root@k8s-master src]# cp etcd*.pem /opt/kubernetes/ssl
  # scp etcd*.pem 172.16.1.32:/opt/kubernetes/ssl
  # scp etcd*.pem 172.16.1.33:/opt/kubernetes/ssl
  [root@linux-node1 src]# rm -f etcd.csr etcd-csr.json

4.3.5 Create the etcd configuration file (it has to be created by hand)
  # On the other nodes, the member name and IP addresses must be changed accordingly (see the sketch after this step).
  [root@linux-node1 ~]# vim /opt/kubernetes/cfg/etcd.conf
  #[member]
  ETCD_NAME="etcd-node1"
  ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
  #ETCD_SNAPSHOT_COUNTER="10000"
  #ETCD_HEARTBEAT_INTERVAL="100"
  #ETCD_ELECTION_TIMEOUT="1000"
  ETCD_LISTEN_PEER_URLS="https://172.16.1.31:2380"
  ETCD_LISTEN_CLIENT_URLS="https://172.16.1.31:2379,https://127.0.0.1:2379"
  #ETCD_MAX_SNAPSHOTS="5"
  #ETCD_MAX_WALS="5"
  #ETCD_CORS=""
  #[cluster]
  ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.1.31:2380"
  # if you use different ETCD_NAME (e.g. test),
  # set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
  ETCD_INITIAL_CLUSTER="etcd-node1=https://172.16.1.31:2380,etcd-node2=https://172.16.1.32:2380,etcd-node3=https://172.16.1.33:2380"
  ETCD_INITIAL_CLUSTER_STATE="new"
  ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
  ETCD_ADVERTISE_CLIENT_URLS="https://172.16.1.31:2379"
  #[security]
  CLIENT_CERT_AUTH="true"
  ETCD_CA_FILE="/opt/kubernetes/ssl/ca.pem"
  ETCD_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
  ETCD_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"
  PEER_CLIENT_CERT_AUTH="true"
  ETCD_PEER_CA_FILE="/opt/kubernetes/ssl/ca.pem"
  ETCD_PEER_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
  ETCD_PEER_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"
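For reference, a sketch of the values that differ in /opt/kubernetes/cfg/etcd.conf on linux-node2 (linux-node3 is analogous, using etcd-node3 and 172.16.1.33; all other lines stay the same):

  ETCD_NAME="etcd-node2"
  ETCD_LISTEN_PEER_URLS="https://172.16.1.32:2380"
  ETCD_LISTEN_CLIENT_URLS="https://172.16.1.32:2379,https://127.0.0.1:2379"
  ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.1.32:2380"
  ETCD_ADVERTISE_CLIENT_URLS="https://172.16.1.32:2379"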

4.3.6 Create the etcd systemd service
  [root@linux-node1 ~]# vim /etc/systemd/system/etcd.service
  [Unit]
  Description=Etcd Server
  After=network.target

  [Service]
  WorkingDirectory=/var/lib/etcd
  EnvironmentFile=-/opt/kubernetes/cfg/etcd.conf
  # set GOMAXPROCS to number of processors
  ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /opt/kubernetes/bin/etcd"
  Type=notify

  [Install]
  WantedBy=multi-user.target

4.3.7 Reload systemd and distribute the files
  [root@linux-node1 ~]# systemctl daemon-reload
  [root@linux-node1 ~]# systemctl enable etcd
  # scp /opt/kubernetes/cfg/etcd.conf 172.16.1.32:/opt/kubernetes/cfg/
  # scp /opt/kubernetes/cfg/etcd.conf 172.16.1.33:/opt/kubernetes/cfg/
  # scp /etc/systemd/system/etcd.service 172.16.1.32:/etc/systemd/system/
  # scp /etc/systemd/system/etcd.service 172.16.1.33:/etc/systemd/system/
  # Create the etcd data directory and start etcd on all nodes
  [root@linux-node1 ~]# mkdir /var/lib/etcd
  [root@linux-node1 ~]# systemctl start etcd
  [root@linux-node1 ~]# systemctl status etcd

4.3.8 Verify the cluster
  [root@linux-node1 ~]# etcdctl --endpoints=https://172.16.1.31:2379 --ca-file=/opt/kubernetes/ssl/ca.pem --cert-file=/opt/kubernetes/ssl/etcd.pem --key-file=/opt/kubernetes/ssl/etcd-key.pem cluster-health
  member 435fb0a8da627a4c is healthy: got healthy result from https://172.16.1.32:2379
  member 6566e06d7343e1bb is healthy: got healthy result from https://172.16.1.31:2379
  member ce7b884e428b6c8c is healthy: got healthy result from https://172.16.1.33:2379
  cluster is healthy
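The peer membership can be double-checked as well; a quick look at the member list using the same certificates (these flags belong to the v2 API, which is the etcdctl default in etcd 3.2):

  [root@linux-node1 ~]# etcdctl --endpoints=https://172.16.1.31:2379 --ca-file=/opt/kubernetes/ssl/ca.pem --cert-file=/opt/kubernetes/ssl/etcd.pem --key-file=/opt/kubernetes/ssl/etcd-key.pem member list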

4.4 Master node deployment (Kubernetes API service)

4.4.1.1 [Kubernetes API service] Prepare the packages
  [root@linux-node1 ~]# cd /usr/local/src && wget https://dl.k8s.io/v1.10.1/kubernetes-server-linux-amd64.tar.gz   # downloading requires a proxy
  [root@linux-node1 ~]# cd /usr/local/src && tar xf kubernetes-server-linux-amd64.tar.gz
  [root@linux-node1 ~]# cd /usr/local/src/kubernetes
  [root@linux-node1 kubernetes]# cp server/bin/kube-apiserver /opt/kubernetes/bin/
  [root@linux-node1 kubernetes]# cp server/bin/kube-controller-manager /opt/kubernetes/bin/
  [root@linux-node1 kubernetes]# cp server/bin/kube-scheduler /opt/kubernetes/bin/

4.4.1.2 [Kubernetes API service] Create the JSON configuration for the CSR
  [root@linux-node1 src]# vim /usr/local/src/ssl/kubernetes-csr.json
  {
    "CN": "kubernetes",
    "hosts": [
      "127.0.0.1",
      "172.16.1.31",
      "10.1.0.1",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
      "algo": "rsa",
      "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "BeiJing",
        "L": "BeiJing",
        "O": "k8s",
        "OU": "System"
      }
    ]
  }

4.4.1.3 [Kubernetes API service] Generate the kubernetes certificate and private key
  [root@linux-node1 ssl]# cd /usr/local/src/ssl/
  [root@linux-node1 ssl]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem -ca-key=/opt/kubernetes/ssl/ca-key.pem -config=/opt/kubernetes/ssl/ca-config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
  [root@linux-node1 src]# cp kubernetes*.pem /opt/kubernetes/ssl/
  # scp kubernetes*.pem 172.16.1.32:/opt/kubernetes/ssl/
  # scp kubernetes*.pem 172.16.1.33:/opt/kubernetes/ssl/

4.4.1.4 [Kubernetes API service] Create the client token file used by kube-apiserver
  [root@linux-node1 ~]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
  cebfb6641d0845bd61808e2337955ea0
  [root@linux-node1 ~]# vim /opt/kubernetes/ssl/bootstrap-token.csv
  cebfb6641d0845bd61808e2337955ea0,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

4.4.1.5 [Kubernetes API service] Create the basic username/password authentication file
  [root@linux-node1 ~]# vim /opt/kubernetes/ssl/basic-auth.csv
  admin,admin,1
  readonly,readonly,2

4.4.1.6 [Kubernetes API service] Deploy the Kubernetes API Server (the unit below also sets the NodePort range that Services may be exposed on)
  [root@linux-node1 ~]# vim /usr/lib/systemd/system/kube-apiserver.service
  [Unit]
  Description=Kubernetes API Server
  Documentation=https://github.com/GoogleCloudPlatform/kubernetes
  After=network.target

  [Service]
  ExecStart=/opt/kubernetes/bin/kube-apiserver \
    --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,NodeRestriction \
    --bind-address=172.16.1.31 \
    --insecure-bind-address=127.0.0.1 \
    --authorization-mode=Node,RBAC \
    --runtime-config=rbac.authorization.k8s.io/v1 \
    --kubelet-https=true \
    --anonymous-auth=false \
    --basic-auth-file=/opt/kubernetes/ssl/basic-auth.csv \
    --enable-bootstrap-token-auth \
    --token-auth-file=/opt/kubernetes/ssl/bootstrap-token.csv \
    --service-cluster-ip-range=10.1.0.0/16 \
    --service-node-port-range=20000-40000 \
    --tls-cert-file=/opt/kubernetes/ssl/kubernetes.pem \
    --tls-private-key-file=/opt/kubernetes/ssl/kubernetes-key.pem \
    --client-ca-file=/opt/kubernetes/ssl/ca.pem \
    --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
    --etcd-cafile=/opt/kubernetes/ssl/ca.pem \
    --etcd-certfile=/opt/kubernetes/ssl/kubernetes.pem \
    --etcd-keyfile=/opt/kubernetes/ssl/kubernetes-key.pem \
    --etcd-servers=https://172.16.1.31:2379,https://172.16.1.32:2379,https://172.16.1.33:2379 \
    --enable-swagger-ui=true \
    --allow-privileged=true \
    --audit-log-maxage=30 \
    --audit-log-maxbackup=3 \
    --audit-log-maxsize=100 \
    --audit-log-path=/opt/kubernetes/log/api-audit.log \
    --event-ttl=1h \
    --v=2 \
    --logtostderr=false \
    --log-dir=/opt/kubernetes/log
  Restart=on-failure
  RestartSec=5
  Type=notify
  LimitNOFILE=65536

  [Install]
  WantedBy=multi-user.target

4.4.1.7 [Kubernetes API service] Start the API Server
  [root@linux-node1 ~]# systemctl daemon-reload
  [root@linux-node1 ~]# systemctl enable kube-apiserver
  [root@linux-node1 ~]# systemctl start kube-apiserver

4.4.1.8 [Kubernetes API service] Check the API Server status
  [root@linux-node1 ~]# systemctl status kube-apiserver

4.4.2.1 [Controller Manager] Configure the Controller Manager
  [root@linux-node1 ~]# vim /usr/lib/systemd/system/kube-controller-manager.service
  [Unit]
  Description=Kubernetes Controller Manager
  Documentation=https://github.com/GoogleCloudPlatform/kubernetes

  [Service]
  ExecStart=/opt/kubernetes/bin/kube-controller-manager \
    --address=127.0.0.1 \
    --master=http://127.0.0.1:8080 \
    --allocate-node-cidrs=true \
    --service-cluster-ip-range=10.1.0.0/16 \
    --cluster-cidr=10.2.0.0/16 \
    --cluster-name=kubernetes \
    --cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
    --cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
    --service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
    --root-ca-file=/opt/kubernetes/ssl/ca.pem \
    --leader-elect=true \
    --v=2 \
    --logtostderr=false \
    --log-dir=/opt/kubernetes/log
  Restart=on-failure
  RestartSec=5

  [Install]
  WantedBy=multi-user.target

4.4.2.2 [Controller Manager] Start the Controller Manager
  [root@linux-node1 ~]# systemctl daemon-reload
  [root@linux-node1 scripts]# systemctl enable kube-controller-manager
  [root@linux-node1 scripts]# systemctl start kube-controller-manager

4.4.2.3 [Controller Manager] Check the service status
  [root@linux-node1 scripts]# systemctl status kube-controller-manager

4.4.3.1 [Kubernetes Scheduler] Configure the Scheduler
  [root@linux-node1 ~]# vim /usr/lib/systemd/system/kube-scheduler.service
  [Unit]
  Description=Kubernetes Scheduler
  Documentation=https://github.com/GoogleCloudPlatform/kubernetes

  [Service]
  ExecStart=/opt/kubernetes/bin/kube-scheduler \
    --address=127.0.0.1 \
    --master=http://127.0.0.1:8080 \
    --leader-elect=true \
    --v=2 \
    --logtostderr=false \
    --log-dir=/opt/kubernetes/log
  Restart=on-failure
  RestartSec=5

  [Install]
  WantedBy=multi-user.target

4.4.3.2 [Kubernetes Scheduler] Deploy the service
  [root@linux-node1 ~]# systemctl daemon-reload
  [root@linux-node1 scripts]# systemctl enable kube-scheduler
  [root@linux-node1 scripts]# systemctl start kube-scheduler
  [root@linux-node1 scripts]# systemctl status kube-scheduler

4.4.3.3 [kubectl command-line tool] Prepare the binary
  [root@linux-node1 ~]# cd /usr/local/src && wget https://dl.k8s.io/v1.10.1/kubernetes-client-linux-amd64.tar.gz   # downloading requires a proxy
  [root@linux-node1 ~]# cd /usr/local/src && tar xf kubernetes-client-linux-amd64.tar.gz
  [root@linux-node1 ~]# cd /usr/local/src/kubernetes/client/bin
  [root@linux-node1 bin]# cp kubectl /opt/kubernetes/bin/

4.4.3.4 [kubectl command-line tool] Create the admin certificate signing request
  [root@linux-node1 ~]# cd /usr/local/src/ssl/
  [root@linux-node1 ssl]# vim admin-csr.json
  {
    "CN": "admin",
    "hosts": [],
    "key": {
      "algo": "rsa",
      "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "BeiJing",
        "L": "BeiJing",
        "O": "system:masters",
        "OU": "System"
      }
    ]
  }

4.4.3.5 [kubectl command-line tool] Generate the admin certificate and private key
  [root@linux-node1 ssl]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem -ca-key=/opt/kubernetes/ssl/ca-key.pem -config=/opt/kubernetes/ssl/ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
  [root@linux-node1 ssl]# ls -l admin*
  -rw-r--r-- 1 root root 1009 Mar 5 12:29 admin.csr
  -rw-r--r-- 1 root root  229 Mar 5 12:28 admin-csr.json
  -rw------- 1 root root 1675 Mar 5 12:29 admin-key.pem
  -rw-r--r-- 1 root root 1399 Mar 5 12:29 admin.pem
  [root@linux-node1 ssl]# mv admin*.pem /opt/kubernetes/ssl/

4.4.3.6 [kubectl command-line tool] Set the cluster parameters
  [root@linux-node1 src]# kubectl config set-cluster kubernetes --certificate-authority=/opt/kubernetes/ssl/ca.pem --embed-certs=true --server=https://172.16.1.31:6443
  Cluster "kubernetes" set.

4.4.3.7 [kubectl command-line tool] Set the client authentication parameters
  [root@linux-node1 src]# kubectl config set-credentials admin --client-certificate=/opt/kubernetes/ssl/admin.pem --embed-certs=true --client-key=/opt/kubernetes/ssl/admin-key.pem
  User "admin" set.

4.4.3.8 [kubectl command-line tool] Set the context parameters
  [root@linux-node1 src]# kubectl config set-context kubernetes --cluster=kubernetes --user=admin
  Context "kubernetes" created.

4.4.3.9 [kubectl command-line tool] Set the default context
  [root@linux-node1 src]# kubectl config use-context kubernetes
  Switched to context "kubernetes".

4.4.3.10 [kubectl command-line tool] Use kubectl to check the component status
  [root@linux-node1 ~]# kubectl get cs
  NAME                 STATUS    MESSAGE              ERROR
  controller-manager   Healthy   ok
  scheduler            Healthy   ok
  etcd-1               Healthy   {"health":"true"}
  etcd-2               Healthy   {"health":"true"}
  etcd-0               Healthy   {"health":"true"}
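The secure port can also be queried directly, which confirms that both the server certificate and the admin client certificate are in order; a quick sanity check with curl (the admin certificate is in the system:masters group, so it has full access):

  [root@linux-node1 ~]# curl --cacert /opt/kubernetes/ssl/ca.pem --cert /opt/kubernetes/ssl/admin.pem --key /opt/kubernetes/ssl/admin-key.pem https://172.16.1.31:6443/version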

4.5 Node deployment


4.5.1.1 [kubelet] Prepare the binaries and copy them from linux-node1 to linux-node2 and linux-node3
  [root@linux-node1 bin]# cd /usr/local/src/kubernetes/server/bin/ && cp kubelet kube-proxy /opt/kubernetes/bin/
  # scp kubelet kube-proxy 172.16.1.32:/opt/kubernetes/bin/
  # scp kubelet kube-proxy 172.16.1.33:/opt/kubernetes/bin/

4.5.1.2 [kubelet] Create the role binding
  [root@linux-node1 ~]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
  clusterrolebinding "kubelet-bootstrap" created

4.5.1.3 [kubelet] Create the kubelet bootstrapping kubeconfig: set the cluster parameters
  [root@linux-node1 ~]# kubectl config set-cluster kubernetes --certificate-authority=/opt/kubernetes/ssl/ca.pem --embed-certs=true --server=https://172.16.1.31:6443 --kubeconfig=bootstrap.kubeconfig
  Cluster "kubernetes" set.

4.5.1.4 [kubelet] Set the client authentication parameters
  [root@linux-node1 ~]# kubectl config set-credentials kubelet-bootstrap --token=cebfb6641d0845bd61808e2337955ea0 --kubeconfig=bootstrap.kubeconfig
  User "kubelet-bootstrap" set.

4.5.1.5 [kubelet] Set the context parameters
  [root@linux-node1 ~]# kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=bootstrap.kubeconfig
  Context "default" created.

4.5.1.6 [kubelet] Select the default context
  [root@linux-node1 ~]# kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
  Switched to context "default".
  [root@linux-node1 kubernetes]# cp /usr/local/src/kubernetes/server/bin/bootstrap.kubeconfig /opt/kubernetes/cfg
  # scp /usr/local/src/kubernetes/server/bin/bootstrap.kubeconfig 172.16.1.32:/opt/kubernetes/cfg
  # scp /usr/local/src/kubernetes/server/bin/bootstrap.kubeconfig 172.16.1.33:/opt/kubernetes/cfg

4.5.1.7 [kubelet] Deploy the kubelet: set up CNI support
  [root@linux-node1 ~]# mkdir -p /etc/cni/net.d
  [root@linux-node1 ~]# vim /etc/cni/net.d/10-default.conf
  {
    "name": "flannel",
    "type": "flannel",
    "delegate": {
      "bridge": "docker0",
      "isDefaultGateway": true,
      "mtu": 1400
    }
  }
  # scp -r /etc/cni/net.d 172.16.1.32:/etc/cni/
  # scp -r /etc/cni/net.d 172.16.1.33:/etc/cni/

4.5.1.8 [kubelet] Create the kubelet working directory
  [root@linux-node1 ~]# mkdir /var/lib/kubelet
  # scp -r /var/lib/kubelet 172.16.1.32:/var/lib/
  # scp -r /var/lib/kubelet 172.16.1.33:/var/lib/

4.5.1.9 [kubelet] Create the kubelet service unit
  # The --address and --hostname-override values must be changed on each node (see the sketch after this step).
  [root@k8s-node1 ~]# vim /usr/lib/systemd/system/kubelet.service
  [Unit]
  Description=Kubernetes Kubelet
  Documentation=https://github.com/GoogleCloudPlatform/kubernetes
  After=docker.service
  Requires=docker.service

  [Service]
  WorkingDirectory=/var/lib/kubelet
  ExecStart=/opt/kubernetes/bin/kubelet \
    --address=172.16.1.31 \
    --hostname-override=172.16.1.31 \
    --pod-infra-container-image=mirrorgooglecontainers/pause-amd64:3.0 \
    --experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
    --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
    --cert-dir=/opt/kubernetes/ssl \
    --network-plugin=cni \
    --cni-conf-dir=/etc/cni/net.d \
    --cni-bin-dir=/opt/kubernetes/bin/cni \
    --cluster-dns=10.1.0.2 \
    --cluster-domain=cluster.local. \
    --hairpin-mode hairpin-veth \
    --allow-privileged=true \
    --fail-swap-on=false \
    --v=2 \
    --logtostderr=false \
    --log-dir=/opt/kubernetes/log
  Restart=on-failure
  RestartSec=5

  [Install]
  WantedBy=multi-user.target

  # scp /usr/lib/systemd/system/kubelet.service 172.16.1.32:/usr/lib/systemd/system/
  # scp /usr/lib/systemd/system/kubelet.service 172.16.1.33:/usr/lib/systemd/system/
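Before starting the kubelet on the nodes, the two per-node values in the unit must point at the node itself: on linux-node2 the ExecStart should use --address=172.16.1.32 and --hostname-override=172.16.1.32, and 172.16.1.33 on linux-node3. A quick way to adjust a unit that was copied unchanged from the master, shown here as a sketch for linux-node2:

  [root@linux-node2 ~]# sed -i 's/172.16.1.31/172.16.1.32/g' /usr/lib/systemd/system/kubelet.service
  [root@linux-node2 ~]# systemctl daemon-reload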

4.5.1.10 [kubelet] Start the kubelet
  [root@linux-node2 ~]# systemctl daemon-reload
  [root@linux-node2 ~]# systemctl enable kubelet
  [root@linux-node2 ~]# systemctl start kubelet
  [root@linux-node3 ~]# systemctl daemon-reload
  [root@linux-node3 ~]# systemctl enable kubelet
  [root@linux-node3 ~]# systemctl start kubelet

4.5.1.11 [kubelet] Check the service status
  [root@linux-node2 kubernetes]# systemctl status kubelet

4.5.1.12 [kubelet] Check the CSR requests (note: run this on linux-node1)
  [root@linux-node1 ~]# kubectl get csr
  NAME                                                   AGE   REQUESTOR           CONDITION
  node-csr-0_w5F1FM_la_SeGiu3Y5xELRpYUjjT2icIFk9gO9KOU   1m    kubelet-bootstrap   Pending

4.5.1.13 [kubelet] Approve the kubelet TLS certificate requests
  [root@linux-node1 ~]# kubectl get csr | grep 'Pending' | awk 'NR>0{print $1}' | xargs kubectl certificate approve
  certificatesigningrequest.certificates.k8s.io "node-csr-QCgiejwSx_bPgcBLNxHkMHs-lzNAY-bJNgm4skUMqII" approved
  # Once the requests are approved, the nodes show up as Ready
  [root@linux-node1 ssl]# kubectl get node -o wide
  NAME          STATUS   ROLES    AGE   VERSION   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION              CONTAINER-RUNTIME
  172.16.1.32   Ready    <none>   10m   v1.10.1   <none>        CentOS Linux 7 (Core)   3.10.0-693.el7.x86_64       docker://19.3.5
  172.16.1.33   Ready    <none>   10m   v1.10.1   <none>        CentOS Linux 7 (Core)   3.10.0-693.el7.x86_64       docker://19.3.5
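The ROLES column shows <none> because node roles are just labels. If you want kubectl get node to display a role, the standard node-role label can be added by hand; an optional sketch:

  [root@linux-node1 ~]# kubectl label node 172.16.1.32 node-role.kubernetes.io/node=
  [root@linux-node1 ~]# kubectl label node 172.16.1.33 node-role.kubernetes.io/node=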

4.5.2.1 [Kubernetes Proxy] Install the LVS tooling used by kube-proxy
  [root@linux-node2 ~]# yum install -y ipvsadm ipset conntrack

4.5.2.2 [Kubernetes Proxy] Create the kube-proxy certificate signing request
  [root@linux-node1 ~]# cd /usr/local/src/ssl/
  [root@linux-node1 ssl]# vim kube-proxy-csr.json
  {
    "CN": "system:kube-proxy",
    "hosts": [],
    "key": {
      "algo": "rsa",
      "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "BeiJing",
        "L": "BeiJing",
        "O": "k8s",
        "OU": "System"
      }
    ]
  }

4.5.2.3 [Kubernetes Proxy] Generate the certificate
  [root@linux-node1 ssl]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem -ca-key=/opt/kubernetes/ssl/ca-key.pem -config=/opt/kubernetes/ssl/ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

4.5.2.4 [Kubernetes Proxy] Distribute the certificate to all Node machines
  [root@linux-node1 ssl]# cp kube-proxy*.pem /opt/kubernetes/ssl/
  # scp kube-proxy*.pem 172.16.1.32:/opt/kubernetes/ssl/
  # scp kube-proxy*.pem 172.16.1.33:/opt/kubernetes/ssl/

4.5.2.5 [Kubernetes Proxy] Create the kube-proxy kubeconfig
  [root@linux-node1 ssl]# kubectl config set-cluster kubernetes --certificate-authority=/opt/kubernetes/ssl/ca.pem --embed-certs=true --server=https://172.16.1.31:6443 --kubeconfig=kube-proxy.kubeconfig
  Cluster "kubernetes" set.
  [root@linux-node1 ssl]# kubectl config set-credentials kube-proxy --client-certificate=/opt/kubernetes/ssl/kube-proxy.pem --client-key=/opt/kubernetes/ssl/kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
  User "kube-proxy" set.
  [root@linux-node1 ssl]# kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
  Context "default" created.
  [root@linux-node1 ssl]# kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
  Switched to context "default".

4.5.2.6 [Kubernetes Proxy] Distribute the kubeconfig file
  [root@linux-node1 ssl]# cp kube-proxy.kubeconfig /opt/kubernetes/cfg/
  # scp kube-proxy.kubeconfig 172.16.1.32:/opt/kubernetes/cfg/
  # scp kube-proxy.kubeconfig 172.16.1.33:/opt/kubernetes/cfg/

4.5.2.7 [Kubernetes Proxy] Create the kube-proxy service unit
  [root@linux-node1 ~]# mkdir /var/lib/kube-proxy
  # scp -r /var/lib/kube-proxy 172.16.1.32:/var/lib/
  # scp -r /var/lib/kube-proxy 172.16.1.33:/var/lib/
  # On each node, change the --bind-address and --hostname-override values to that node's own IP.
  [root@k8s-node1 ~]# vim /usr/lib/systemd/system/kube-proxy.service
  [Unit]
  Description=Kubernetes Kube-Proxy Server
  Documentation=https://github.com/GoogleCloudPlatform/kubernetes
  After=network.target

  [Service]
  WorkingDirectory=/var/lib/kube-proxy
  ExecStart=/opt/kubernetes/bin/kube-proxy \
    --bind-address=172.16.1.31 \
    --hostname-override=172.16.1.31 \
    --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig \
    --masquerade-all \
    --feature-gates=SupportIPVSProxyMode=true \
    --proxy-mode=ipvs \
    --ipvs-min-sync-period=5s \
    --ipvs-sync-period=5s \
    --ipvs-scheduler=rr \
    --v=2 \
    --logtostderr=false \
    --log-dir=/opt/kubernetes/log
  Restart=on-failure
  RestartSec=5
  LimitNOFILE=65536

  [Install]
  WantedBy=multi-user.target

  # scp /usr/lib/systemd/system/kube-proxy.service 172.16.1.32:/usr/lib/systemd/system/
  # scp /usr/lib/systemd/system/kube-proxy.service 172.16.1.33:/usr/lib/systemd/system/
4.5.2.8 [Kubernetes Proxy] Start kube-proxy (on the Node machines)
  [root@linux-node2 ~]# systemctl daemon-reload
  [root@linux-node2 ~]# systemctl enable kube-proxy
  [root@linux-node2 ~]# systemctl start kube-proxy
  [root@linux-node3 ~]# systemctl daemon-reload
  [root@linux-node3 ~]# systemctl enable kube-proxy
  [root@linux-node3 ~]# systemctl start kube-proxy

4.5.2.9 [Kubernetes Proxy] Check the service status
  [root@linux-node2 scripts]# systemctl status kube-proxy
  # Check the LVS state
  [root@linux-node2 ~]# ipvsadm -L -n
  IP Virtual Server version 1.2.1 (size=4096)
  Prot LocalAddress:Port Scheduler Flags
    -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
  TCP  10.1.0.1:443 rr persistent 10800
    -> 172.16.1.31:6443             Masq    1      0          0
  # If kubelet and kube-proxy are running on both node machines, the cluster state can be checked with:
  [root@linux-node1 ssl]# kubectl get node
  NAME          STATUS   ROLES    AGE   VERSION
  172.16.1.32   Ready    <none>   22m   v1.10.1
  172.16.1.33   Ready    <none>   3m    v1.10.1
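If ipvsadm prints an empty table, the usual cause is that the IPVS kernel modules are not loaded, in which case kube-proxy falls back to iptables mode. A quick check and remedy sketch (module names from the stock CentOS 7 kernel):

  [root@linux-node2 ~]# lsmod | grep ip_vs
  [root@linux-node2 ~]# modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4
  [root@linux-node2 ~]# systemctl restart kube-proxy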

4.6 Flannel network deployment

4.6.1 Create a certificate for Flannel
  [root@linux-node1 ~]# cd /usr/local/src/ssl
  [root@linux-node1 ssl]# vim flanneld-csr.json
  {
    "CN": "flanneld",
    "hosts": [],
    "key": {
      "algo": "rsa",
      "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "BeiJing",
        "L": "BeiJing",
        "O": "k8s",
        "OU": "System"
      }
    ]
  }

4.6.2 Generate the certificate
  [root@linux-node1 ssl]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem -ca-key=/opt/kubernetes/ssl/ca-key.pem -config=/opt/kubernetes/ssl/ca-config.json -profile=kubernetes flanneld-csr.json | cfssljson -bare flanneld
  [root@linux-node1 ssl]# ls -l flanneld*.pem
  -rw------- 1 root root 1675 Dec 27 18:55 flanneld-key.pem
  -rw-r--r-- 1 root root 1391 Dec 27 18:55 flanneld.pem

4.6.3 Distribute the certificate
  [root@linux-node1 ssl]# cp flanneld*.pem /opt/kubernetes/ssl/
  # scp flanneld*.pem 172.16.1.32:/opt/kubernetes/ssl/
  # scp flanneld*.pem 172.16.1.33:/opt/kubernetes/ssl/

4.6.4 Download the Flannel package
  [root@linux-node1 ~]# cd /usr/local/src && wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
  [root@linux-node1 src]# tar zxf flannel-v0.10.0-linux-amd64.tar.gz
  [root@linux-node1 src]# cp flanneld mk-docker-opts.sh /opt/kubernetes/bin/
  # Copy the binaries to the other nodes
  # scp flanneld mk-docker-opts.sh 172.16.1.32:/opt/kubernetes/bin/
  # scp flanneld mk-docker-opts.sh 172.16.1.33:/opt/kubernetes/bin/
  # Copy the helper script into /opt/kubernetes/bin as well
  [root@linux-node1 ~]# wget https://dl.k8s.io/v1.10.1/kubernetes.tar.gz   # downloading this package requires a proxy
  [root@linux-node1 ~]# tar xf kubernetes.tar.gz -C /usr/local/src/ && cd /usr/local/src/kubernetes/cluster/centos/node/bin/
  [root@linux-node1 bin]# cp remove-docker0.sh /opt/kubernetes/bin/
  # scp remove-docker0.sh 172.16.1.32:/opt/kubernetes/bin/
  # scp remove-docker0.sh 172.16.1.33:/opt/kubernetes/bin/

4.6.5 Configure Flannel
  [root@linux-node1 ~]# vim /opt/kubernetes/cfg/flannel
  FLANNEL_ETCD="-etcd-endpoints=https://172.16.1.31:2379,https://172.16.1.32:2379,https://172.16.1.33:2379"
  FLANNEL_ETCD_KEY="-etcd-prefix=/kubernetes/network"
  FLANNEL_ETCD_CAFILE="--etcd-cafile=/opt/kubernetes/ssl/ca.pem"
  FLANNEL_ETCD_CERTFILE="--etcd-certfile=/opt/kubernetes/ssl/flanneld.pem"
  FLANNEL_ETCD_KEYFILE="--etcd-keyfile=/opt/kubernetes/ssl/flanneld-key.pem"
  # Copy the configuration to the other nodes
  # scp /opt/kubernetes/cfg/flannel 172.16.1.32:/opt/kubernetes/cfg/
  # scp /opt/kubernetes/cfg/flannel 172.16.1.33:/opt/kubernetes/cfg/

4.6.6 Create the Flannel systemd service
  [root@linux-node1 ~]# vim /usr/lib/systemd/system/flannel.service
  [Unit]
  Description=Flanneld overlay address etcd agent
  After=network.target
  Before=docker.service

  [Service]
  EnvironmentFile=-/opt/kubernetes/cfg/flannel
  ExecStartPre=/opt/kubernetes/bin/remove-docker0.sh
  ExecStart=/opt/kubernetes/bin/flanneld ${FLANNEL_ETCD} ${FLANNEL_ETCD_KEY} ${FLANNEL_ETCD_CAFILE} ${FLANNEL_ETCD_CERTFILE} ${FLANNEL_ETCD_KEYFILE}
  ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -d /run/flannel/docker
  Type=notify

  [Install]
  WantedBy=multi-user.target
  RequiredBy=docker.service

  # Copy the unit file to the other nodes
  # scp /usr/lib/systemd/system/flannel.service 172.16.1.32:/usr/lib/systemd/system/
  # scp /usr/lib/systemd/system/flannel.service 172.16.1.33:/usr/lib/systemd/system/

4.6.7 [Flannel CNI integration] Download the CNI plugins
  [root@linux-node1 ~]# wget https://github.com/containernetworking/plugins/releases/download/v0.7.1/cni-plugins-amd64-v0.7.1.tgz
  [root@linux-node1 ~]# mkdir /opt/kubernetes/bin/cni
  [root@linux-node1 ~]# tar zxf cni-plugins-amd64-v0.7.1.tgz -C /opt/kubernetes/bin/cni
  # scp -r /opt/kubernetes/bin/cni 172.16.1.32:/opt/kubernetes/bin/
  # scp -r /opt/kubernetes/bin/cni 172.16.1.33:/opt/kubernetes/bin/

4.6.8 [Flannel CNI integration] Create the network key in etcd
  [root@linux-node1 ~]# /opt/kubernetes/bin/etcdctl --ca-file /opt/kubernetes/ssl/ca.pem --cert-file /opt/kubernetes/ssl/flanneld.pem --key-file /opt/kubernetes/ssl/flanneld-key.pem --no-sync -C https://172.16.1.31:2379,https://172.16.1.32:2379,https://172.16.1.33:2379 mk /kubernetes/network/config '{ "Network": "10.2.0.0/16", "Backend": { "Type": "vxlan", "VNI": 1 }}' >/dev/null 2>&1

4.6.9 [Flannel CNI integration] Start flannel (on all nodes)
  [root@linux-node1 ~]# systemctl daemon-reload
  [root@linux-node1 ~]# systemctl enable flannel
  [root@linux-node1 ~]# chmod +x /opt/kubernetes/bin/*
  [root@linux-node1 ~]# systemctl start flannel

4.6.10 [Flannel CNI integration] Check the service status
  [root@linux-node1 ~]# systemctl status flannel

4.6.11 [Flannel CNI integration] Configure Docker to use Flannel
  [root@linux-node1 ~]# vim /usr/lib/systemd/system/docker.service
  [Unit]   # in the [Unit] section, modify After= and add Requires=
  After=network-online.target firewalld.service flannel.service
  Wants=network-online.target
  Requires=flannel.service   # Docker startup depends on the flannel network

  [Service]   # add EnvironmentFile=-/run/flannel/docker
  Type=notify
  EnvironmentFile=-/run/flannel/docker
  ExecStart=/usr/bin/dockerd $DOCKER_OPTS

  # Copy the configuration to the other two nodes
  # scp /usr/lib/systemd/system/docker.service 172.16.1.32:/usr/lib/systemd/system/
  # scp /usr/lib/systemd/system/docker.service 172.16.1.33:/usr/lib/systemd/system/

4.6.12 [Flannel CNI integration] Restart Docker (on all nodes)
  [root@linux-node1 ~]# systemctl daemon-reload
  [root@linux-node1 ~]# systemctl restart docker
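After Docker restarts, every node's docker0 bridge should sit inside the per-node subnet that flannel leased out of 10.2.0.0/16, and pods on different nodes should be able to reach each other. A short verification sketch (the flannel.1 interface name comes from the vxlan backend configured above):

  ip addr show flannel.1   # flannel VXLAN interface carrying this node's subnet
  ip addr show docker0     # docker0 should be inside the same 10.2.x.0/24 subnet
  # optional end-to-end test: start two pods and check that they get 10.2.x.x addresses on different nodes
  [root@linux-node1 ~]# kubectl run net-test --image=alpine --replicas=2 sleep 360000
  [root@linux-node1 ~]# kubectl get pod -o wide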

4.7 CoreDNS deployment

4.7.1 Write the CoreDNS YAML file
  [root@linux-node1 ~]# vim coredns.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local. in-addr.arpa ip6.arpa {
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
    }
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: coredns
  template:
    metadata:
      labels:
        k8s-app: coredns
    spec:
      serviceAccountName: coredns
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      containers:
      - name: coredns
        image: coredns/coredns:1.0.6
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: coredns
  clusterIP: 10.1.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP

4.7.2 Deploy CoreDNS
  [root@linux-node1 ~]# kubectl create -f coredns.yaml

4.7.3 Test that DNS resolution works
  [root@linux-node1 ~]# kubectl run dns-test --rm -it --image=alpine /bin/sh
  If you don't see a command prompt, try pressing enter.
  / # ping www.baidu.com -c 2
  PING www.baidu.com (61.135.169.125): 56 data bytes
  64 bytes from 61.135.169.125: seq=0 ttl=127 time=5.718 ms
  64 bytes from 61.135.169.125: seq=1 ttl=127 time=5.695 ms
  --- www.baidu.com ping statistics ---
  2 packets transmitted, 2 packets received, 0% packet loss
  round-trip min/avg/max = 5.695/5.706/5.718 ms
  / #
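Resolution of in-cluster Service names can be checked from the same test pod as well; a short sketch (the output format of alpine's busybox nslookup varies slightly between versions):

  / # nslookup kubernetes.default.svc.cluster.local
  # the answer should be 10.1.0.1, the cluster IP of the kubernetes Service, served by CoreDNS at 10.1.0.2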

4.8 Dashboard deployment

4.8.1 Create a directory for the dashboard YAML files (any location will do)
  [root@linux-node1 ~]# mkdir -p /root/dashboard_yaml_dir
4.8.2 Write the admin-user-sa-rbac.yaml file
  [root@linux-node1 ~]# vim /root/dashboard_yaml_dir/admin-user-sa-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
4.8.3 Write the kubernetes-dashboard.yaml file
  [root@linux-node1 ~]# vim /root/dashboard_yaml_dir/kubernetes-dashboard.yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Configuration to deploy release version of the Dashboard UI compatible with
# Kubernetes 1.8.
#
# Example usage: kubectl create -f <this_file>

# ------------------- Dashboard Secret ------------------- #
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque
---
# ------------------- Dashboard Service Account ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
---
# ------------------- Dashboard Role & Role Binding ------------------- #
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
---
# ------------------- Dashboard Deployment ------------------- #
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        #image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
        image: mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.8.3
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
---
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 443
    targetPort: 8443
    nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard
4.8.4 Write the ui-admin-rbac.yaml file
  [root@linux-node1 ~]# vim /root/dashboard_yaml_dir/ui-admin-rbac.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ui-admin
rules:
- apiGroups:
  - ""
  resources:
  - services
  - services/proxy
  verbs:
  - '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ui-admin-binding
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ui-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: admin
4.8.5 Write the ui-read-rbac.yaml file
  [root@linux-node1 ~]# vim /root/dashboard_yaml_dir/ui-read-rbac.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ui-read
rules:
- apiGroups:
  - ""
  resources:
  - services
  - services/proxy
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ui-read-binding
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ui-read
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: readonly
4.8.6 Create the Dashboard
  [root@linux-node1 ~]# kubectl create -f /root/dashboard_yaml_dir/
  [root@linux-node1 ~]# kubectl cluster-info
  Kubernetes master is running at https://172.16.1.31:6443
  kubernetes-dashboard is running at https://172.16.1.31:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy

  To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
4.8.7 Access the Dashboard
  https://172.16.1.31:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
  Username: admin, password: admin; then choose the Token login mode.
4.8.8 Get the Token
  kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
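Because the Service above is exposed as a NodePort (30001), the dashboard can also be opened directly on any node, without going through the API server proxy; a sketch:

  https://172.16.1.32:30001
  # The browser will warn about the dashboard's self-signed certificate; accept it, then log in with the token printed by the command above.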