Environment
CentOS 7
There are three ways to install Kubernetes: yum, binaries, and kubeadm. This article demonstrates kubeadm.
Part 1: Preparation
1. Software versions
Software | Version |
Kubernetes | v1.15.3 |
CentOS 7.6 | CentOS Linux release 7.6.1810 (Core) |
Docker | docker-ce-19.03.1-3.el7.x86_64 |
flannel | 0.11.0 |
2. Cluster topology
IP | Role | Hostname |
192.168.118.106 | master | node106 k8s-master |
192.168.118.107 | node01 | node107 k8s-node01 |
192.168.118.108 | node02 | node108 k8s-node02 |
The nodes and network are planned as shown above.
3. System settings
3.1 Configure hostnames in /etc/hosts
192.168.118.106 node106 k8s-master
192.168.118.107 node107 k8s-node01
192.168.118.108 node108 k8s-node02
3.2 Disable the firewall
[root@node106 ~]# yum install -y net-tools
# Stop the firewall
[root@node106 ~]# systemctl stop firewalld
# Disable the firewall on boot
[root@node106 ~]# systemctl disable firewalld
3.3 File permissions: disable SELinux
This allows containers to access the host filesystem.
[root@node106 ~]# sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
[root@node106 ~]# setenforce 0
3.4 Disable swap
Kubernetes tries to pack workloads onto nodes as close to 100% utilization as possible, and every deployment should be pinned down with CPU/memory limits, so when the scheduler places a pod on a machine that pod should never swap.
The designers avoid swap because it slows things down, so disabling swap is primarily a performance decision. If you do need to conserve resources, for example when running a large number of containers, you can instead pass the kubelet flag --fail-swap-on=false (see the sketch after the commands below).
[root@node106 ~]# swapoff -a
[root@node106 ~]# sed -i 's/.*swap.*/#&/' /etc/fstab
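If you prefer to keep swap enabled, a rough sketch of the --fail-swap-on=false alternative mentioned above (not used in this walkthrough; it assumes the kubelet RPM installed later provides /etc/sysconfig/kubelet):

# Tell the kubelet to tolerate swap (run after the kubelet package is installed)
echo 'KUBELET_EXTRA_ARGS=--fail-swap-on=false' > /etc/sysconfig/kubelet
systemctl daemon-reload && systemctl restart kubelet
# kubeadm's preflight check must also be told to tolerate swap
# kubeadm init --ignore-preflight-errors=Swap ...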
3.5 Configure forwarding parameters
On RHEL/CentOS 7, traffic can be routed incorrectly because iptables is bypassed, so net.bridge.bridge-nf-call-iptables must be set to 1 in the sysctl configuration.
Make sure the br_netfilter module is loaded before this step. Check with lsmod | grep br_netfilter; load it explicitly with modprobe br_netfilter.
(1) First, check whether the br_netfilter module is loaded
[root@node106 ~]# lsmod | grep br_netfilter
br_netfilter           22256  0
bridge                151336  1 br_netfilter
(2) If it is not loaded, load it
[root@node106 ~]# modprobe br_netfilter
(3) Configure net.bridge.bridge-nf-call-iptables
[root@node106 ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> EOF
[root@node106 ~]# sysctl --system
* Applying /usr/lib/sysctl.d/00-system.conf ...
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
* Applying /etc/sysctl.conf ...
4. Install Docker
(1) Configure the Docker repository.
[root@node106 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@node106 ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Disable the docker-ce-edge repository (development builds, unstable)
[root@node106 ~]# yum-config-manager --disable docker-ce-edge
[root@node106 ~]# yum makecache fast
(2) List the Docker versions currently available in the official repository
[root@node106 yum.repos.d]# yum list docker-ce.x86_64 --showduplicates | sort -r
 * updates: mirrors.aliyun.com
Loading mirror speeds from cached hostfile
Loaded plugins: fastestmirror
 * extras: mirrors.aliyun.com
docker-ce.x86_64            3:19.03.1-3.el7                     docker-ce-stable
docker-ce.x86_64            3:19.03.0-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.8-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.7-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.6-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.5-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.4-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.3-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.2-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.1-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.0-3.el7                     docker-ce-stable
docker-ce.x86_64            18.06.3.ce-3.el7                    docker-ce-stable
docker-ce.x86_64            18.06.2.ce-3.el7                    docker-ce-stable
docker-ce.x86_64            18.06.1.ce-3.el7                    docker-ce-stable
docker-ce.x86_64            18.06.0.ce-3.el7                    docker-ce-stable
docker-ce.x86_64            18.03.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            18.03.0.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.12.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.12.0.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.09.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.09.0.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.06.2.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.06.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.06.0.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.03.3.ce-1.el7                    docker-ce-stable
docker-ce.x86_64            17.03.2.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.03.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.03.0.ce-1.el7.centos             docker-ce-stable
 * base: mirrors.aliyun.com
Available Packages
(3) Install Docker
[root@node106 ~]# yum install docker-ce-19.03.1-3.el7 -y
(4) Configure a registry mirror (accelerator)
[root@node106 ~]# mkdir -p /etc/docker
[root@node106 ~]# tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://qr09dqf9.mirror.aliyuncs.com"]
}
EOF
(5) Start Docker
[root@node106 ~]# systemctl daemon-reload
[root@node106 ~]# systemctl enable docker
[root@node106 ~]# systemctl start docker
Verify:
[root@node106 ~]# docker -v
Docker version 19.03.1, build 74b1e89
5. Install the Kubernetes components
5.1 Configure the Aliyun Kubernetes yum repository.
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# Refresh the yum cache
[root@node106 ~]# yum makecache fast -y
# List the kubectl, kubelet, and kubeadm packages
[root@node106 ~]# yum list kubectl kubelet kubeadm
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
Available Packages
kubeadm.x86_64    1.15.3-0    kubernetes
kubectl.x86_64    1.15.3-0    kubernetes
kubelet.x86_64    1.15.3-0
# Install
[root@node106 ~]# yum install -y kubectl kubelet kubeadm
Enable the kubelet service
[root@node106 ~]# systemctl enable --now kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
6. Load the IPVS kernel modules
IPVS (IP Virtual Server) implements transport-layer load balancing, the layer-4 switching we often talk about, as part of the Linux kernel. Running on a host, IPVS acts as a load balancer in front of a cluster of real servers: it forwards TCP- and UDP-based service requests to the real servers and makes their services appear as a single virtual service on one IP address. Pod load balancing is handled by kube-proxy, which has two modes: iptables (the default) and ipvs; ipvs simply performs better than iptables. A sketch of switching kube-proxy to ipvs mode follows at the end of this section.
(1) Load the ipvs kernel modules so that kube-proxy on the nodes can use ipvs proxy rules.
# Check whether the modules are already loaded
[root@node106 ~]# cut -f1 -d " " /proc/modules | grep -e ip_vs -e nf_conntrack_ipv4
ip_vs_sh
ip_vs_wrr
ip_vs_rr
ip_vs
nf_conntrack_ipv4
# If they are not loaded, load them with:
modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
modprobe nf_conntrack_ipv4
(2) Add the modules to /etc/rc.local so they are loaded on boot
cat <<EOF >> /etc/rc.local
modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
modprobe nf_conntrack_ipv4
EOF
(3) ipvs also requires ipset
[root@node106 ~]# yum install ipset ipvsadm -y
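The steps above only make ipvs available; kube-proxy still runs in iptables mode by default. A minimal sketch of switching it to ipvs mode, assuming the cluster built later in this article is already up:

# Edit the kube-proxy ConfigMap and change mode: "" to mode: "ipvs"
kubectl -n kube-system edit configmap kube-proxy
# Restart the kube-proxy pods so they pick up the new mode
kubectl -n kube-system delete pod -l k8s-app=kube-proxy
# Verify that ipvs virtual servers are being programmed
ipvsadm -Ln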
References:
IPVS load balancing in a Kubernetes cluster, explained
How to enable ipvs in Kubernetes
Part 2: Installing the master node
1. Initialize the master node
kubeadm init --kubernetes-version=v1.15.3
(1) Problems encountered during initialization
First init attempt:
[root@node106 ~]# kubeadm init --kubernetes-version=v1.15.3
[init] Using Kubernetes version: v1.15.3
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.1. Latest validated version: 18.09
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR NumCPU]: the number of available CPUs 1 is less than the required 2
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
Analysis:
Warning 1: [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". One way to address this is sketched after this list.
Warning 2: [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.1. Latest validated version: 18.09 — just a version compatibility warning.
Warning 3: [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
Fix: [root@node106 ~]# systemctl enable kubelet.service
Error 1: [ERROR NumCPU]: give the virtual machine more than one CPU core and retry.
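For Warning 1, a common fix (a sketch only; in this walkthrough the warning was simply tolerated) is to switch Docker's cgroup driver to systemd and restart Docker:

# Merge with the daemon.json written earlier for the registry mirror
cat <<EOF > /etc/docker/daemon.json
{
  "registry-mirrors": ["https://qr09dqf9.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload && systemctl restart docker
docker info | grep -i cgroup   # should now report: Cgroup Driver: systemd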
Second init attempt:
[root@node106 ~]# kubeadm init --kubernetes-version=v1.15.3
[init] Using Kubernetes version: v1.15.3
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.1. Latest validated version: 18.09
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.15.3: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
        [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.15.3: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
        [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.15.3: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
        [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.15.3: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
        [ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
        [ERROR ImagePull]: failed to pull image k8s.gcr.io/etcd:3.3.10: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
        [ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.3.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
[root@node106 ~]#
Analysis:
Error 1: [ERROR ImagePull]: the image pulls fail because they go to Google's registry (k8s.gcr.io). You can pull the images with docker using the versions shown in the errors, or list the required versions with kubeadm config images list.
[root@node106 ~]# kubeadm config images list
W0906 11:12:52.841583   16407 version.go:98] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W0906 11:12:52.841780   16407 version.go:99] falling back to the local client version: v1.15.3
k8s.gcr.io/kube-apiserver:v1.15.3
k8s.gcr.io/kube-controller-manager:v1.15.3
k8s.gcr.io/kube-scheduler:v1.15.3
k8s.gcr.io/kube-proxy:v1.15.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1
[root@node106 ~]#
(2) Prepare the images
mirrorgooglecontainers mirrors the latest Kubernetes images on Docker Hub, so download them from there first and then retag them.
# Pull the images
[root@node106 ~]# kubeadm config images list |sed -e 's/^/docker pull /g' -e 's#k8s.gcr.io#mirrorgooglecontainers#g' |sh -x && docker pull coredns/coredns:1.3.1
# Retag the images with their k8s.gcr.io names
[root@node106 ~]# docker images |grep mirrorgooglecontainers |awk '{print "docker tag ",$1":"$2,$1":"$2}' |sed -e 's#mirrorgooglecontainers#k8s.gcr.io#2' |sh -x && docker tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
# Remove the now-unneeded mirror images
[root@node106 ~]# docker images | grep mirrorgooglecontainers | awk '{print "docker rmi " $1":"$2}' | sh -x && docker rmi coredns/coredns:1.3.1
The result:
[root@node106 ~]# docker images
REPOSITORY                           TAG       IMAGE ID       CREATED         SIZE
k8s.gcr.io/kube-proxy                v1.15.3   232b5c793146   2 weeks ago     82.4MB
k8s.gcr.io/kube-apiserver            v1.15.3   5eb2d3fc7a44   2 weeks ago     207MB
k8s.gcr.io/kube-controller-manager   v1.15.3   e77c31de5547   2 weeks ago     159MB
k8s.gcr.io/kube-scheduler            v1.15.3   703f9c69a5d5   2 weeks ago     81.1MB
k8s.gcr.io/coredns                   1.3.1     eb516548c180   7 months ago    40.3MB
k8s.gcr.io/etcd                      3.3.10    2c4adeb21b4f   9 months ago    258MB
k8s.gcr.io/pause                     3.1       da86e6ba6ca1   20 months ago   742kB
[root@node106 ~]#
(3) Initialize
Because the flannel network plugin will be installed later, add the parameter --pod-network-cidr=10.244.0.0/16.
10.244.0.0/16 is the subnet flannel's manifest uses by default; the value depends on which network plugin you plan to install.
[root@node106 ~]# kubeadm init --pod-network-cidr=10.244.0.0/16 --kubernetes-version=1.15.3
[init] Using Kubernetes version: v1.15.3
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.1. Latest validated version: 18.09
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [node106 localhost] and IPs [192.168.118.106 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [node106 localhost] and IPs [192.168.118.106 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [node106 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.118.106]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 20.007081 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node node106 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node node106 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: unqj7v.wr7yvcj8i7wan93g
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.118.106:6443 --token unqj7v.wr7yvcj8i7wan93g \
    --discovery-token-ca-cert-hash sha256:011f55be71445e7031ac7a582afc7a4350cdf6d8ae8bef790d2517634d93f337
Follow-up steps:
[root@node106 ~]# mkdir -p $HOME/.kube
[root@node106 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@node106 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl reads its configuration from $HOME/.kube/config by default; if you skip this step, kubectl commands will fail.
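As an alternative sketch (not used in the rest of this walkthrough), root can point kubectl directly at the admin kubeconfig instead of copying it:

# Use the admin kubeconfig generated by kubeadm for the current shell
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get nodes   # verify kubectl can reach the API server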
2. Configure the pod network
Flannel is an overlay network tool designed by the CoreOS team for Kubernetes; its goal is to give every host running Kubernetes a complete subnet of its own.
Flannel provides a virtual network for containers by assigning each host a subnet. It is based on Linux TUN/TAP, builds the overlay network by encapsulating IP packets in UDP, and uses etcd to track how the network is allocated.
# Download the flannel manifest
[root@node106 ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@node106 ~]# ll
total 20
-rw-------. 1 root root  1779 Aug 15 14:39 anaconda-ks.cfg
-rw-r--r--  1 root root 12487 Sep  6 16:42 kube-flannel.yml
# Apply kube-flannel.yml
[root@node106 ~]# kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
# Check the pods on the master
[root@node106 ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   coredns-5c98db65d4-dwjfs          1/1     Running   0          3h57m
kube-system   coredns-5c98db65d4-xxdr2          1/1     Running   0          3h57m
kube-system   etcd-node106                      1/1     Running   0          3h56m
kube-system   kube-apiserver-node106            1/1     Running   0          3h56m
kube-system   kube-controller-manager-node106   1/1     Running   0          3h56m
kube-system   kube-flannel-ds-amd64-srdxz       1/1     Running   0          2m32s
kube-system   kube-proxy-8mxmm                  1/1     Running   0          3h57m
kube-system   kube-scheduler-node106            1/1     Running   0          3h56m
If a pod is not in the Running state, something went wrong; troubleshoot with the following commands.
Describe the pod:
[root@node106 ~]# kubectl describe pod kube-scheduler-node106 -n kube-system
View its logs:
[root@node106 ~]# kubectl logs kube-scheduler-node106 -n kube-system
Reference: Flannel installation and deployment
Part 3: Installing the worker nodes
1. Download the required images
The node107 and node108 nodes only need the kube-proxy and pause images.
[root@node107 ~]# docker images
REPOSITORY              TAG       IMAGE ID       CREATED         SIZE
k8s.gcr.io/kube-proxy   v1.15.3   232b5c793146   2 weeks ago     82.4MB
k8s.gcr.io/pause        3.1       da86e6ba6ca1   20 months ago   742kB
2. Join the nodes
When the master was initialized successfully, the output ended with a kubeadm join command; that is the command that adds nodes to the cluster.
Run it on node107 and node108:
[root@node107 ~]# kubeadm join 192.168.118.106:6443 --token unqj7v.wr7yvcj8i7wan93g \
>     --discovery-token-ca-cert-hash sha256:011f55be71445e7031ac7a582afc7a4350cdf6d8ae8bef790d2517634d93f337
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.1. Latest validated version: 18.09
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Tip: if kubeadm join complains that the token has expired, run kubeadm token create on the master to generate a new one, as in the prompt. If you have forgotten the token, list existing tokens with kubeadm token list. A quick sketch follows.
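For example, run on the master (a sketch of the commands mentioned above):

# List existing bootstrap tokens and their expiry times
kubeadm token list
# Create a fresh token and print the full join command for it
kubeadm token create --print-join-command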
Part 4: Verifying the cluster
1. Node status
[root@node106 ~]# kubectl get nodes
NAME      STATUS   ROLES    AGE     VERSION
node106   Ready    master   4h53m   v1.15.3
node107   Ready    <none>   101s    v1.15.3
node108   Ready    <none>   82s     v1.15.3
2. Component status
[root@node106 ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
3. Service accounts
[root@node106 ~]# kubectl get serviceaccount
NAME      SECRETS   AGE
default   1         5h1m
4. Cluster info
[root@node106 ~]# kubectl cluster-info
Kubernetes master is running at https://192.168.118.106:6443
KubeDNS is running at https://192.168.118.106:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
5. Verify DNS
[root@node106 ~]# kubectl run curl --image=radial/busyboxplus:curl -it
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
If you don't see a command prompt, try pressing enter.
[ root@curl-6bf6db5c4f-dn65h:/ ]$ nslookup kubernetes.default
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
Part 5: A sample workload
Create an nginx service to check that the cluster is usable.
(1) Create and run a deployment
[root@node106 ~]# kubectl run nginx --replicas=2 --labels="run=load-balancer-example" --image=nginx --port=80
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/nginx created
(2) Expose the deployment as a NodePort service
[root@node106 ~]# kubectl expose deployment nginx --type=NodePort --name=example-service
service/example-service exposed
# Inspect the service details
[root@node106 ~]# kubectl describe service example-service
Name:                     example-service
Namespace:                default
Labels:                   run=load-balancer-example
Annotations:              <none>
Selector:                 run=load-balancer-example
Type:                     NodePort
IP:                       10.108.73.249
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  32168/TCP
Endpoints:                10.244.1.4:80,10.244.2.2:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

# Check the service status
[root@node106 ~]# kubectl get service
NAME              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
example-service   NodePort    10.108.73.249   <none>        80:32168/TCP   91s
kubernetes        ClusterIP   10.96.0.1       <none>        443/TCP        44h
[root@node106 ~]#

# Check the pods. The application's desired configuration and current state are stored in etcd;
# when you run kubectl get pod, the API Server reads that data from etcd.
[root@node106 ~]# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
curl-6bf6db5c4f-dn65h    1/1     Running   2          39h
nginx-5c47ff5dd6-hjxq8   1/1     Running   0          3m10s
nginx-5c47ff5dd6-qj9k2   1/1     Running   0          3m10s
(3) Access the service IP
[root@node106 ~]# curl 10.108.73.249:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
Accessing an endpoint returns the same result as accessing the service IP. These IPs are only reachable from containers and nodes inside the Kubernetes cluster. Endpoints map to the service: the service load-balances across its backend endpoints, and the mapping is implemented with iptables (see the sketch after the curl commands below).
[root@node106 ~]# curl 10.244.1.4:80
[root@node106 ~]# curl 10.244.2.2:80
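To see that mapping for yourself, a quick sketch (example-service and port 32168 come from the output above; run on the master):

# The endpoints backing the service
kubectl get endpoints example-service
# The NAT rules kube-proxy programmed for it (iptables mode)
iptables-save -t nat | grep example-service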
Accessing a node IP on the NodePort returns the same result as the cluster IP, and it also works from outside the cluster:
[root@node106 ~]# curl 192.168.118.107:32168
[root@node106 ~]# curl 192.168.118.108:32168
The whole deployment flow looks like this:
① kubectl sends the deployment request to the API Server.
② The API Server notifies the Controller Manager to create a deployment resource.
③ The Scheduler performs scheduling and places the two replica Pods on node01 and node02.
④ The kubelet on node01 and node02 creates and runs the Pods on its own node.
flannel assigns an IP to every Pod.
References:
Installing Kubernetes with yum
Installing Kubernetes from binaries
Installing Kubernetes with kubeadm
A step-by-step guide to building a Kubernetes cluster on CentOS
Official docs: Installing kubeadm