This example uses the kubeadm tool for the installation. kubelet and docker must be running on all hosts.
Prerequisites: time synchronization, firewalld and iptables disabled, and local name resolution via /etc/hosts (steps omitted).
Local environment: docker79, docker78, and docker77, with IP addresses 192.168.20.79, 192.168.20.78, and 192.168.20.77 respectively.
1. Configure the yum repositories
[root@docker79 ~]# wget https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
[root@docker79 ~]# wget https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
[root@docker79 ~]# rpm --import rpm-package-key.gpg
[root@docker79 ~]# rpm --import yum-key.gpg
[root@docker79 ~]# vim /etc/yum.repos.d/kubernetes.repo
[root@docker79 ~]# cat /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
[root@docker79 ~]# cd /etc/yum.repos.d/
[root@docker79 yum.repos.d]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@docker79 ~]# yum repolist
repo id                     repo name                   status
base/7/x86_64               CentOS-7 - Base             9,911
docker-ce-stable/x86_64     Docker CE Stable - x86_64   17
extras/7/x86_64             CentOS-7 - Extra            401
kubernetes                  kubernetes                  246
updates/7/x86_64            CentOS-7 - Updates          1,308
repolist: 11,883
[root@docker79 ~]# yum install docker-ce kubelet kubeadm kubectl ipvsadm
.........
Installed:
  docker-ce.x86_64 0:18.06.1.ce-3.el7    kubeadm.x86_64 0:1.11.2-0
  kubectl.x86_64 0:1.11.2-0              kubelet.x86_64 0:1.11.2-0
Installed as dependencies:
  audit-libs-python.x86_64 0:2.8.1-3.el7_5.1    checkpolicy.x86_64 0:2.5-6.el7
  container-selinux.noarch 2:2.68-1.el7         cri-tools.x86_64 0:1.11.0-0
  kubernetes-cni.x86_64 0:0.6.0-0               libcgroup.x86_64 0:0.41-15.el7
  libseccomp.x86_64 0:2.3.1-3.el7               libsemanage-python.x86_64 0:2.5-11.el7
  policycoreutils-python.x86_64 0:2.5-22.el7    python-IPy.noarch 0:0.75-6.el7
  setools-libs.x86_64 0:3.3.8-2.el7             socat.x86_64 0:1.7.3.2-2.el7
Upgraded as dependencies:
  audit.x86_64 0:2.8.1-3.el7_5.1    audit-libs.x86_64 0:2.8.1-3.el7_5.1
Complete!
2. Configure Docker
[root@docker79 ~]# vim /usr/lib/systemd/system/docker.service
[root@docker79 ~]# grep Environment /usr/lib/systemd/system/docker.service
Environment="HTTPS_PROXY=http://proxy.domainname.com:3128"
Environment="NO_PROXY=127.0.0.0/8,192.168.0.0/16"
[root@docker79 ~]# systemctl daemon-reload
[root@docker79 ~]# systemctl start docker
[root@docker79 ~]# docker --version
Docker version 18.06.1-ce, build e68fc7a
[root@docker79 ~]# docker info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: 18.06.1-ce
Storage Driver: overlay2
 Backing Filesystem: xfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 468a545b9edcd5932818eb9de8e72413e616e86e
runc version: 69663f0bd4b60df09991c08812a60108003fa340
init version: fec3683
Security Options:
 seccomp
  Profile: default
Kernel Version: 3.10.0-862.3.3.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 15.51GiB
Name: docker79
ID: SQLL:4WEQ:POHM:S5F6:6GPY:UIUK:L3XQ:DI7K:WT47:JPVF:AFLG:LPET
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
HTTPS Proxy: http://proxy.domainname.com:3128
No Proxy: 127.0.0.0/8,192.168.0.0/16
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
[root@docker79 ~]#
Note:
In this example docker's Cgroup Driver is cgroupfs, so cluster initialization and node joins will both print a warning about the Cgroup Driver. If you need to set the Cgroup Driver to systemd, it can be specified in a daemon.json file under /etc/docker/. The daemon.json in this environment (which only configures a registry mirror) looks like this:
[root@k8s-master-dev ~]# cat /etc/docker/daemon.json
{"registry-mirrors": ["http://9645cd65.m.daocloud.io"]}
Then restart docker for the change to take effect.
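As a sketch of how the two settings could be combined, the snippet below writes a candidate daemon.json that sets both the registry mirror from this article and the systemd cgroup driver, and validates it before it would be copied to /etc/docker/. The `exec-opts`/`native.cgroupdriver=systemd` key is dockerd's standard option for the cgroup driver; everything else is taken from the article.

```shell
# Write a candidate daemon.json to a scratch directory and validate it
# before installing it as /etc/docker/daemon.json and restarting docker.
dir=$(mktemp -d)
cat > "$dir/daemon.json" <<'EOF'
{
    "registry-mirrors": ["http://9645cd65.m.daocloud.io"],
    "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# Reject malformed JSON early: dockerd refuses to start on a bad file.
python3 -m json.tool "$dir/daemon.json" > /dev/null && echo "daemon.json: valid JSON"
```

After copying the validated file into place, `systemctl restart docker` applies it, and `docker info` should then report `Cgroup Driver: systemd`.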
[root@docker79 ~]# tail -3 /etc/sysctl.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
[root@docker79 ~]# sysctl --system
[root@docker79 ~]# cat /proc/sys/net/bridge/bridge-nf-call-iptables
1
[root@docker79 ~]# cat /proc/sys/net/bridge/bridge-nf-call-ip6tables
1
[root@docker79 ~]# rpm -ql kubelet
/etc/kubernetes/manifests
/etc/sysconfig/kubelet
/etc/systemd/system/kubelet.service
/usr/bin/kubelet
[root@docker79 ~]# cat /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS=          (this is where the swap-ignore option is set later)
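The three kernel parameters above can be sanity-checked with a small script before running kubeadm. This sketch runs against a temporary copy of the config; on a real host you would point `conf` at /etc/sysctl.conf instead.

```shell
# Verify that the kernel parameters kubeadm's preflight checks expect
# are all set to 1 in a sysctl config file. A temporary file stands in
# for /etc/sysctl.conf here so the check is runnable anywhere.
conf=$(mktemp)
cat > "$conf" <<'EOF'
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

missing=0
for key in net.bridge.bridge-nf-call-iptables \
           net.bridge.bridge-nf-call-ip6tables \
           net.ipv4.ip_forward; do
    grep -Eq "^${key}[[:space:]]*=[[:space:]]*1" "$conf" \
        || { echo "missing: $key"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "sysctl config OK"
```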
3. Deploy Kubernetes with kubeadm
[root@docker79 ~]# systemctl stop kubelet
[root@docker79 ~]# systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
[root@docker79 ~]# systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@docker79 ~]# vim /etc/sysconfig/kubelet
[root@docker79 ~]# cat /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
[root@docker79 ~]#
Note: in this example the required images were downloaded in advance and packaged into a gz archive. For a different version or environment, run `kubeadm config images list` to see which images and versions are required, then download them with `docker pull`.
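The download-and-package step can be sketched as a command generator: given the image list (hard-coded here to the v1.11.2 images used in this article; on a connected host you would feed it from `kubeadm config images list`), it prints the pull/save commands for review instead of running them, so the sketch works without docker installed.

```shell
# Emit the docker pull/save commands for each required image.
images="k8s.gcr.io/kube-apiserver-amd64:v1.11.2
k8s.gcr.io/kube-controller-manager-amd64:v1.11.2
k8s.gcr.io/kube-scheduler-amd64:v1.11.2
k8s.gcr.io/kube-proxy-amd64:v1.11.2
k8s.gcr.io/etcd-amd64:3.2.18
k8s.gcr.io/coredns:1.1.3
k8s.gcr.io/pause:3.1"

cmds=""
for img in $images; do
    # Turn "repo/name:tag" into a filesystem-safe tarball name.
    tarball="$(echo "$img" | tr '/:' '__').tar"
    cmds="$cmds
docker pull $img
docker save -o $tarball $img"
done
echo "$cmds"
```

The resulting tarballs can then be packed into one k8s.gcr.io.tar.gz and restored on each node with `docker load`, as shown below.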
[root@docker79 ~]# tar xfz k8s.gcr.io.tar.gz
[root@docker79 ~]# cd k8s.gcr.io
[root@docker79 k8s.gcr.io]# for image in `ls`; do docker load < $image ; done
5bef08742407: Loading layer [=================>] 4.221MB/4.221MB
594f5d257cbe: Loading layer [=================>] 9.335MB/9.335MB
53cb05deeb1b: Loading layer [=================>] 32.66MB/32.66MB
Loaded image: k8s.gcr.io/coredns:1.1.3
0314be9edf00: Loading layer [=================>] 1.36MB/1.36MB
fd10c5022c9c: Loading layer [=================>] 194.9MB/194.9MB
77066773b816: Loading layer [=================>] 22.9MB/22.9MB
Loaded image: k8s.gcr.io/etcd-amd64:3.2.18
f9d9e4e6e2f0: Loading layer [=================>] 1.378MB/1.378MB
428ad8419125: Loading layer [=================>] 185.5MB/185.5MB
Loaded image: k8s.gcr.io/kube-apiserver-amd64:v1.11.2
d5095f39a884: Loading layer [=================>] 154.1MB/154.1MB
Loaded image: k8s.gcr.io/kube-controller-manager-amd64:v1.11.2
582b548209e1: Loading layer [=================>] 44.2MB/44.2MB
e20569a478ed: Loading layer [=================>] 3.358MB/3.358MB
ada0a9dc1320: Loading layer [=================>] 52.06MB/52.06MB
Loaded image: k8s.gcr.io/kube-proxy-amd64:v1.11.2
a4a9cf804060: Loading layer [=================>] 55.61MB/55.61MB
Loaded image: k8s.gcr.io/kube-scheduler-amd64:v1.11.2
e17133b79956: Loading layer [=================>] 744.4kB/744.4kB
Loaded image: k8s.gcr.io/pause:3.1
[root@docker79 ~]# kubeadm init --help
[root@docker79 ~]# kubeadm init --kubernetes-version=v1.11.2 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
[init] using Kubernetes version: v1.11.2
.........
Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 192.168.20.79:6443 --token c0kgd5.pqmp9p4luuwmvv1x --discovery-token-ca-cert-hash sha256:d69f0a84d23d143f7f48aaec0c19e0f268fad8e9145dec37cb3989f55eb8a534

[root@docker79 ~]# mkdir .kube
[root@docker79 ~]# cp -i /etc/kubernetes/admin.conf .kube/config
[root@docker79 ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}
[root@docker79 ~]# kubectl get componentstatus
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}
[root@docker79 ~]# kubectl get nodes
NAME       STATUS     ROLES    AGE   VERSION
docker79   NotReady   master   14m   v1.11.2
[root@docker79 ~]#
4. Deploy flannel
Reference: https://github.com/coreos/flannel
[root@docker79 ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@docker79 ~]# ls kube-flannel.yml
kube-flannel.yml
[root@docker79 ~]# kubectl apply -f kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
[root@docker79 ~]# kubectl get nodes
NAME       STATUS     ROLES    AGE   VERSION
docker79   NotReady   master   9m    v1.11.2
Note: wait a moment; once the flannel pods are Running, the node status changes to Ready.
[root@docker79 ~]# kubectl get nodes
NAME       STATUS   ROLES    AGE   VERSION
docker79   Ready    master   25m   v1.11.2
[root@docker79 ~]# kubectl get pods -n kube-system
NAME                               READY   STATUS    RESTARTS   AGE
coredns-78fcdf6894-fhcqd           1/1     Running   0          25m
coredns-78fcdf6894-pqjm4           1/1     Running   0          25m
etcd-docker79                      1/1     Running   0          24m
kube-apiserver-docker79            1/1     Running   0          24m
kube-controller-manager-docker79   1/1     Running   0          25m
kube-flannel-ds-amd64-rlk27        1/1     Running   0          2m
kube-proxy-s7q7r                   1/1     Running   0          25m
kube-scheduler-docker79            1/1     Running   0          24m
[root@docker79 ~]# kubectl get ns
NAME          STATUS   AGE
default       Active   26m
kube-public   Active   26m
kube-system   Active   26m
[root@docker79 ~]#
5. Join the nodes to the Kubernetes cluster
[root@docker79 ~]# scp /etc/yum.repos.d/kubernetes.repo docker78:/etc/yum.repos.d/
[root@docker79 ~]# scp /etc/yum.repos.d/docker-ce.repo docker78:/etc/yum.repos.d/
[root@docker79 ~]# scp *.gpg docker78:/root/
[root@docker79 ~]# scp k8s.gcr.io.tar.gz docker78:/root/
[root@docker79 ~]# scp /etc/sysctl.conf docker78:/etc/
[root@docker79 ~]# scp /etc/sysconfig/kubelet docker78:/etc/sysconfig/
[root@docker78 ~]# rpm --import rpm-package-key.gpg
[root@docker78 ~]# rpm --import yum-key.gpg
[root@docker78 ~]# yum install docker-ce kubelet kubeadm kubectl
..............
[root@docker79 ~]# scp /usr/lib/systemd/system/docker.service docker78:/usr/lib/systemd/system/
[root@docker79 ~]# scp /etc/sysconfig/kubelet docker78:/etc/sysconfig/
[root@docker78 ~]# systemctl enable docker
[root@docker78 ~]# systemctl enable kubelet
[root@docker78 ~]# systemctl start docker
[root@docker78 ~]# tar xfz k8s.gcr.io.tar.gz
[root@docker78 ~]# cd k8s.gcr.io
[root@docker78 k8s.gcr.io]# for image in `ls` ; do docker load < $image ; done
[root@docker78 ~]# kubeadm join 192.168.20.79:6443 --token c0kgd5.pqmp9p4luuwmvv1x --discovery-token-ca-cert-hash sha256:d69f0a84d23d143f7f48aaec0c19e0f268fad8e9145dec37cb3989f55eb8a534 --ignore-preflight-errors=Swap
......
The same operations are performed on docker77 (omitted).
6. Appendix: removing a node from the Kubernetes cluster
1) kubectl drain nodename --delete-local-data --force --ignore-daemonsets    # before deleting a node, evict the pods running on it
2) kubectl delete node nodename    # then delete the node
3) Run kubeadm reset on the removed node; otherwise it will fail when it later tries to rejoin the cluster.
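The three steps above can be collected into a small helper that prints the exact commands for a given node. The node name here is just an example; the commands are printed rather than executed, since steps 1 and 2 run on the master and step 3 on the node itself.

```shell
# Print the node-removal sequence for one node.
node=docker77    # example node name

out=$(cat <<EOF
# on the master:
kubectl drain $node --delete-local-data --force --ignore-daemonsets
kubectl delete node $node
# on $node itself, so it can rejoin cleanly later:
kubeadm reset
EOF
)
echo "$out"
```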
7. Basic kubectl usage
kubectl talks to the API server to create, delete, update, and query the various resources. It can manage many object types: pod, service, node, and the controllers (replicaset, deployment, statefulset, daemonset, job, cronjob).
(1) View cluster information
[root@docker79 ~]# kubectl cluster-info
Kubernetes master is running at https://192.168.20.79:6443
KubeDNS is running at https://192.168.20.79:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
(2) View version information
[root@docker79 ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:17:28Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:08:19Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
[root@docker79 ~]#
(3) kubectl basics
[root@docker79 ~]# kubectl run nginx-deploy --image=nginx:latest --replicas=1 --dry-run
deployment.apps/nginx-deploy created (dry run)
[root@docker79 ~]# kubectl run nginx-deploy --image=nginx:latest --replicas=1
deployment.apps/nginx-deploy created
[root@docker79 ~]# kubectl get deployment
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deploy   1         1         1            0           12s
[root@docker79 ~]# kubectl get deployment
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deploy   1         1         1            1           1m
[root@docker79 ~]# kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
nginx-deploy-7f497cdbcf-k8rgl   1/1     Running   0          1m
[root@docker79 ~]# kubectl get pods -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE
nginx-deploy-7f497cdbcf-k8rgl   1/1     Running   0          1m    10.244.1.2   docker78   <none>
[root@docker79 ~]# kubectl get nodes --show-labels
NAME       STATUS   ROLES    AGE   VERSION   LABELS
docker77   Ready    <none>   15m   v1.11.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=docker77
docker78   Ready    <none>   15m   v1.11.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=docker78
docker79   Ready    master   29m   v1.11.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=docker79,node-role.kubernetes.io/master=
[root@docker79 ~]# elinks --dump http://10.244.1.2        (access the pod IP)
   Welcome to nginx!
   If you see this page, the nginx web server is successfully installed and
   working. Further configuration is required.
   For online documentation and support please refer to [1]nginx.org.
   Commercial support is available at [2]nginx.com.
   Thank you for using nginx.
   References
   Visible links
   1. http://nginx.org/
   2. http://nginx.com/
[root@docker79 ~]# kubectl delete pod nginx-deploy-7f497cdbcf-k8rgl
pod "nginx-deploy-7f497cdbcf-k8rgl" deleted
[root@docker79 ~]# kubectl get pods -o wide        (after the pod is deleted, the controller recreates it)
NAME                            READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE
nginx-deploy-7f497cdbcf-r2tn6   1/1     Running   0          17s   10.244.1.3   docker78   <none>
[root@docker79 ~]# kubectl expose deployment nginx-deploy --name=nginx --port=80 --target-port=80 --protocol=TCP --type=ClusterIP
service/nginx exposed
[root@docker79 ~]# kubectl get svc        (the newly created service)
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP   40m
nginx        ClusterIP   10.100.115.103   <none>        80/TCP    9s
[root@docker79 ~]# elinks --dump http://10.100.115.103        (access the service IP)
   Welcome to nginx!
   If you see this page, the nginx web server is successfully installed and
   working. Further configuration is required.
   For online documentation and support please refer to [1]nginx.org.
   Commercial support is available at [2]nginx.com.
   Thank you for using nginx.
   References
   Visible links
   1. http://nginx.org/
   2. http://nginx.com/
[root@docker79 ~]# kubectl get pods -n kube-system        (note the DNS pods)
NAME                               READY   STATUS    RESTARTS   AGE
coredns-78fcdf6894-lkkxl           1/1     Running   0          45m
coredns-78fcdf6894-rw42j           1/1     Running   0          45m
etcd-docker79                      1/1     Running   0          44m
kube-apiserver-docker79            1/1     Running   0          44m
kube-controller-manager-docker79   1/1     Running   0          44m
kube-flannel-ds-amd64-kx6x6        1/1     Running   0          31m
kube-flannel-ds-amd64-tz62z        1/1     Running   0          33m
kube-flannel-ds-amd64-wcnmh        1/1     Running   0          31m
kube-proxy-h6ltq                   1/1     Running   0          31m
kube-proxy-lwqkr                   1/1     Running   0          31m
kube-proxy-wz6pz                   1/1     Running   0          45m
kube-scheduler-docker79            1/1     Running   0          44m
[root@docker79 ~]# kubectl get svc -n kube-system        (the cluster DNS service)
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP   46m
[root@docker79 ~]# kubectl run client --image=busybox:latest --replicas=1 -it --restart=Never
If you don't see a command prompt, try pressing enter.
/ # cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
/ # wget -O - -q http://nginx        (service names resolve through the cluster DNS)
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
......
[root@docker79 ~]# dig -t A nginx.default.svc.cluster.local @10.96.0.10
......
;; ANSWER SECTION:
nginx.default.svc.cluster.local. 5 IN A 10.100.115.103

;; Query time: 1 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Sun Sep 23 18:56:58 CST 2018
;; MSG SIZE  rcvd: 107
[root@docker79 ~]# kubectl scale --replicas=2 deployment nginx-deploy        (scale the original deployment out)
deployment.extensions/nginx-deploy scaled
[root@docker79 ~]# kubectl get pods -o wide
NAME                            READY   STATUS         RESTARTS   AGE   IP           NODE       NOMINATED NODE
client                          0/1     Completed      0          8m    10.244.1.4   docker78   <none>
nginx-deploy-7f497cdbcf-4wjfr   0/1     ErrImagePull   0          10s   10.244.2.2   docker77   <none>
nginx-deploy-7f497cdbcf-r2tn6   1/1     Running        0          20m   10.244.1.3   docker78   <none>
[root@docker78 ~]# docker images
REPOSITORY                                 TAG             IMAGE ID       CREATED         SIZE
nginx                                      latest          06144b287844   2 weeks ago     109MB
k8s.gcr.io/kube-controller-manager-amd64   v1.11.2         38521457c799   6 weeks ago     155MB
k8s.gcr.io/kube-proxy-amd64                v1.11.2         46a3cd725628   6 weeks ago     97.8MB
k8s.gcr.io/kube-apiserver-amd64            v1.11.2         821507941e9c   6 weeks ago     187MB
k8s.gcr.io/kube-scheduler-amd64            v1.11.2         37a1403e6c1a   6 weeks ago     56.8MB
k8s.gcr.io/coredns                         1.1.3           b3b94275d97c   4 months ago    45.6MB
k8s.gcr.io/etcd-amd64                      3.2.18          b8df3b177be2   5 months ago    219MB
quay.io/coreos/flannel                     v0.10.0-amd64   f0fad859c909   8 months ago    44.6MB
k8s.gcr.io/pause                           3.1             da86e6ba6ca1   9 months ago    742kB
quay.io/coreos/flannel                     v0.9.1          2b736d06ca4c   10 months ago   51.3MB
[root@docker78 ~]# docker ps
CONTAINER ID   IMAGE                  COMMAND                  CREATED          STATUS          NAMES
532bb3de6463   nginx                  "nginx -g 'daemon of…"   2 minutes ago    Up 2 minutes    k8s_nginx-deploy_nginx-deploy-7f497cdbcf-k8rgl_default_d191ad57-bf1b-11e8-aca7-000c295011ce_0
4ec26e0ccf80   k8s.gcr.io/pause:3.1   "/pause"                 3 minutes ago    Up 3 minutes    k8s_POD_nginx-deploy-7f497cdbcf-k8rgl_default_d191ad57-bf1b-11e8-aca7-000c295011ce_0
e265f9a85a24   f0fad859c909           "/opt/bin/flanneld -…"   16 minutes ago   Up 16 minutes   k8s_kube-flannel_kube-flannel-ds-amd64-wcnmh_kube-system_07aa1723-bf1a-11e8-aca7-000c295011ce_0
3baf108c6aa4   46a3cd725628           "/usr/local/bin/kube…"   16 minutes ago   Up 16 minutes   k8s_kube-proxy_kube-proxy-lwqkr_kube-system_07aa126d-bf1a-11e8-aca7-000c295011ce_0
94c09a2c0507   k8s.gcr.io/pause:3.1   "/pause"                 16 minutes ago   Up 16 minutes   k8s_POD_kube-proxy-lwqkr_kube-system_07aa126d-bf1a-11e8-aca7-000c295011ce_0
22858af6f86f   k8s.gcr.io/pause:3.1   "/pause"                 16 minutes ago   Up 16 minutes   k8s_POD_kube-flannel-ds-amd64-wcnmh_kube-system_07aa1723-bf1a-11e8-aca7-000c295011ce_0
[root@docker78 ~]#
Deployment upgrade:
kubectl set image deployment DeployNAME ContainerNAME=ImageNAME:NewVersion
Deployment rollback:
kubectl rollout undo deployment DeployNAME
Editing a resource:
kubectl edit svc SvcName
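A concrete (hypothetical) instance of the commands above, filled in with the nginx-deploy deployment from this article and an assumed target image tag. The commands are printed for review rather than executed, since they need a live cluster; the container name is assumed to match the name `kubectl run` gave it.

```shell
deploy=nginx-deploy      # deployment created earlier in this article
container=nginx-deploy   # assumed: kubectl run names the container after the deployment
new_image=nginx:1.15     # assumed upgrade target

out=$(cat <<EOF
kubectl set image deployment $deploy $container=$new_image
kubectl rollout undo deployment $deploy
kubectl edit svc nginx
EOF
)
echo "$out"
```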
8. Troubleshooting
1) The following message appears when joining the cluster:
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.20.79:6443"
[discovery] Failed to connect to API Server "192.168.20.79:6443": token id "nhgji7" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
Check the tokens on the master:
[root@k8s-master-dev ~]# kubeadm token list
TOKEN     TTL       EXPIRES   USAGES    DESCRIPTION   EXTRA GROUPS
[root@k8s-master-dev ~]#
This shows that the token created at cluster initialization has expired (tokens are valid for 24 hours), so a new one must be created:
[root@k8s-master-dev ~]# kubeadm token create
i1p283.mudnqd3raawz2o01
[root@k8s-master-dev ~]# kubeadm token list
TOKEN                     TTL   EXPIRES                     USAGES                   DESCRIPTION   EXTRA GROUPS
i1p283.mudnqd3raawz2o01   23h   2019-04-03T14:46:43+08:00   authentication,signing   <none>        system:bootstrappers:kubeadm:default-node-token
[root@k8s-master-dev ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
92ac0cd4d6224025da32385a4d46df7e80ee11c203bdb53e61e827baa1211536
[root@k8s-master-dev ~]#
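The discovery-token-ca-cert-hash is just the SHA-256 digest of the cluster CA certificate's public key in DER form. The same openssl pipeline can be exercised anywhere with a throwaway self-signed certificate standing in for /etc/kubernetes/pki/ca.crt (which only exists on a real master):

```shell
# Generate a throwaway CA certificate so the hash pipeline can be
# demonstrated without a cluster; on a master, point -in at
# /etc/kubernetes/pki/ca.crt instead.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=kubernetes" \
    -keyout "$dir/ca.key" -out "$dir/ca.crt" 2>/dev/null

# Same pipeline as above: extract the public key, convert it to DER,
# hash it, and strip the "(stdin)= " prefix from the digest output.
hash=$(openssl x509 -pubkey -in "$dir/ca.crt" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$hash"
```

The printed value is what goes after `--discovery-token-ca-cert-hash sha256:` in the kubeadm join command.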
Then run the following on the node:
[root@k8s-node5-dev ~]# kubeadm join 192.168.20.79:6443 --token i1p283.mudnqd3raawz2o01 --discovery-token-ca-cert-hash sha256:92ac0cd4d6224025da32385a4d46df7e80ee11c203bdb53e61e827baa1211536 --ignore-preflight-errors=Swap
2) The following message appears when joining the cluster:
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused
Modify the kubelet configuration file and restart kubelet:
[root@k8s-node6-dev ~]# vim /etc/sysconfig/kubelet
[root@k8s-node6-dev ~]# cat /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
[root@k8s-node6-dev ~]# systemctl restart kubelet
[root@k8s-node6-dev ~]# kubeadm join 192.168.20.79:6443 --token i1p283.mudnqd3raawz2o01 --discovery-token-ca-cert-hash sha256:92ac0cd4d6224025da32385a4d46df7e80ee11c203bdb53e61e827baa1211536 --ignore-preflight-errors=Swap
3) The following message appears when joining the cluster:
[kubelet-check] Initial timeout of 40s passed.
error execution phase kubelet-start: error uploading crisocket: timed out waiting for the condition
This means the node previously joined another Kubernetes cluster and its old CSR/state was not cleaned up. Run kubeadm reset on the node, then run the join command again.