Kubernetes, the container orchestration platform open-sourced by Google, has become hugely popular. Standing up a complete Kubernetes platform is the first hurdle anyone trying it out has to clear. Up to and including Kubernetes 1.5, installation was relatively easy: the official yum repositories could install Kubernetes on CentOS 7 directly. Since Kubernetes 1.6, however, installation has become much more involved, requiring certificates and various kinds of authentication, which is unfriendly to people who are new to Kubernetes.
docker: the container runtime that kubernetes depends on
kubelet: the core kubernetes agent; one runs on every node and manages the lifecycle of pods and the node itself
kubectl: the kubernetes command-line tool; it only needs to be used on the master
kubeadm: used to bootstrap kubernetes and initialize a k8s cluster
[root@master /]# cat /etc/hosts
127.0.0.1     localhost localhost.localdomain localhost4 localhost4.localdomain4
::1           localhost localhost.localdomain localhost6 localhost6.localdomain6
18.16.202.163 master
18.16.202.227 slaver1
18.16.202.95  slaver2
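For the names in the hosts file to match, each machine's hostname should be set accordingly. A minimal sketch using hostnamectl (run the matching line on the corresponding host; the names are taken from this setup):

# on 18.16.202.163
hostnamectl set-hostname master
# on 18.16.202.227
hostnamectl set-hostname slaver1
# on 18.16.202.95
hostnamectl set-hostname slaver2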
Add the following to /etc/profile:
export http_proxy="http://18.16.202.169:8118"
export https_proxy="https://18.16.202.169:8118"
printf -v no_ip_proxy '%s,' 18.16.202.{1..255}
export no_proxy=.baidu.com,.aliyun.com,.aliyuncs.com,.360doc.com,.163.com,.163yun.com,.tencent.com,qq.com,.daocloud.io,.cn,local,localhost,localdomain,127.0.0.1,"${no_ip_proxy%,}"
Note that asterisk wildcards cannot be used here for fuzzy matching; Linux does not support them. For example, parameterized with a variable:
ip_host="192.168.3.7:8118" export http_proxy="http://${ip_host}" export https_proxy="https://${ip_host}" printf -v no_ip_proxy '%s,' 192.168.236.{1..255}; export no_proxy=.baidu.com,.aliyun.com,.aliyuncs.com,.360doc.com,.163.com,.163yun.com,.tencent.com,qq.com,.daocloud.io,.cn,local,localhost,localdomain,127.0.0.1,"${no_ip_proxy%,}"
To turn the proxy off, simply unset the variables:
unset https_proxy
unset http_proxy
To use the proxy again:
source /etc/profile
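To confirm the proxy variables are in effect for the current shell, a quick check (the test URL is only an example):

env | grep -i _proxy
curl -I https://packages.cloud.google.com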
Turn off the firewall:

systemctl stop firewalld
systemctl disable firewalld
Let iptables process bridged traffic:
Create the file /etc/sysctl.d/k8s.conf with the following content:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
Run the following commands to apply the changes:
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
Temporarily disable SELinux:

setenforce 0
Edit /etc/selinux/config and set SELINUX to disabled, for example:
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# result: SELINUX=disabled
Starting with Kubernetes 1.8, swap must be turned off; otherwise kubelet will not start with its default configuration. Method 1: lift the restriction with the kubelet startup parameter --fail-swap-on=false. Method 2: disable the system swap.
swapoff -a
Edit /etc/fstab and comment out the swap mount, then confirm with free -m that swap is off.
# comment out the swap partition
[root@localhost /]# sed -i 's/.*swap.*/#&/' /etc/fstab
#/dev/mapper/centos-swap swap    swap    defaults    0 0
[root@localhost /]# free -m
              total        used        free      shared  buff/cache   available
Mem:            962         154         446           6         361         612
Swap:             0           0           0
In addition, set vm.swappiness to 0 so the kernel avoids swapping:
echo "vm.swappiness = 0">> /etc/sysctl.conf
Since IPVS has already been merged into the mainline kernel, the prerequisite for enabling IPVS mode in kube-proxy is loading the following kernel modules:
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
Run the following script on every Kubernetes node:
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
The script above creates the file /etc/sysconfig/modules/ipvs.modules so that the required modules are loaded automatically after the node reboots. Use lsmod | grep -e ip_vs -e nf_conntrack_ipv4 to verify that the required kernel modules have been loaded correctly.
Next, make sure the ipset package is installed on every node.
yum install -y ipset
To make it easier to inspect the IPVS proxy rules, it is also worth installing the management tool ipvsadm.
yum install -y ipvsadm
If these prerequisites are not met, kube-proxy will fall back to iptables mode even when its configuration enables IPVS mode.
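One note worth checking against your own kernel: on kernels 4.19 and later, nf_conntrack_ipv4 was merged into nf_conntrack, so the last modprobe line in the script above would need to be adjusted accordingly (the kernel in this setup is 4.17, so the original name works here):

# for kernel >= 4.19
modprobe -- nf_conntrack
lsmod | grep -e ip_vs -e nf_conntrack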
Add the Docker CE yum repository:

sudo yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum makecache fast
List the available Docker versions:
[root@localhost /]# yum list docker-ce.x86_64 --showduplicates | sort -r
Loaded plugins: fastestmirror
Available Packages
 * updates: mirrors.aliyun.com
Loading mirror speeds from cached hostfile
 * extras: mirrors.aliyun.com
 * elrepo: mirrors.tuna.tsinghua.edu.cn
docker-ce.x86_64            3:19.03.0-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.8-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.7-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.6-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.5-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.4-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.3-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.2-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.1-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.0-3.el7                     docker-ce-stable
docker-ce.x86_64            18.06.3.ce-3.el7                    docker-ce-stable
docker-ce.x86_64            18.06.2.ce-3.el7                    docker-ce-stable
docker-ce.x86_64            18.06.1.ce-3.el7                    docker-ce-stable
docker-ce.x86_64            18.06.0.ce-3.el7                    docker-ce-stable
docker-ce.x86_64            18.03.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            18.03.0.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.12.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.12.0.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.09.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.09.0.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.06.2.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.06.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.06.0.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.03.3.ce-1.el7                    docker-ce-stable
docker-ce.x86_64            17.03.2.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.03.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.03.0.ce-1.el7.centos             docker-ce-stable
Install Docker:
# sudo yum -y install docker-ce
sudo yum install -y --setopt=obsoletes=0 docker-ce-18.09.8-3.el7
systemctl enable docker.service
systemctl restart docker
Here I installed docker-ce 18.09.
Enable Docker to start on boot:
[root@master /]# systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
For Linux distributions that use systemd as the init system, using systemd as Docker's cgroup driver keeps the node more stable under resource pressure, so change Docker's cgroup driver to systemd on every node.
Create or edit /etc/docker/daemon.json:
{ "registry-mirrors": ["https://tqvgn53t.mirror.aliyuncs.com"], "exec-opts": ["native.cgroupdriver=systemd"] }
Restart Docker and confirm the cgroup driver:
systemctl restart docker
docker info | grep Cgroup
 Cgroup Driver: systemd
Add the Kubernetes yum repository:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
Test whether https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 is reachable:
curl https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
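If the Google repository is unreachable even through the proxy, a commonly used domestic mirror can be substituted. The Aliyun URLs below are an assumption to verify for your own environment:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF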
Install kubelet, kubeadm and kubectl:
yum makecache fast
yum install -y kubelet kubeadm kubectl
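If you want to pin the packages to the same version as the images used later (v1.15.1) and make sure kubelet starts on boot, a sketch (the exact package revision available in your repository may differ):

yum install -y kubelet-1.15.1 kubeadm-1.15.1 kubectl-1.15.1
systemctl enable kubelet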
Before initializing the cluster, you can use kubeadm config images pull to pre-pull the Docker images that k8s needs on each node.
[root@localhost /]# kubeadm config images list
W0725 10:52:57.395062    8776 version.go:98] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W0725 10:52:57.395395    8776 version.go:99] falling back to the local client version: v1.15.1
k8s.gcr.io/kube-apiserver:v1.15.1
k8s.gcr.io/kube-controller-manager:v1.15.1
k8s.gcr.io/kube-scheduler:v1.15.1
k8s.gcr.io/kube-proxy:v1.15.1
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1
[root@localhost /]# kubeadm config images pull
W0725 10:55:12.586377    8781 version.go:98] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: proxyconnect tcp: net/http: TLS handshake timeout
W0725 10:55:12.586550    8781 version.go:99] falling back to the local client version: v1.15.1
This is clearly a network problem: the images on k8s.gcr.io cannot be fetched.
在網上找了其餘的資源,建立一個shell文件,粘貼運行
MY_REGISTRY=gcr.azk8s.cn/google-containers

## pull the images
docker pull ${MY_REGISTRY}/kube-apiserver:v1.15.1
docker pull ${MY_REGISTRY}/kube-controller-manager:v1.15.1
docker pull ${MY_REGISTRY}/kube-scheduler:v1.15.1
docker pull ${MY_REGISTRY}/kube-proxy:v1.15.1
docker pull ${MY_REGISTRY}/pause:3.1
docker pull ${MY_REGISTRY}/etcd:3.3.10
docker pull ${MY_REGISTRY}/coredns:1.3.1

## retag them as k8s.gcr.io images
docker tag ${MY_REGISTRY}/kube-apiserver:v1.15.1 k8s.gcr.io/kube-apiserver:v1.15.1
docker tag ${MY_REGISTRY}/kube-controller-manager:v1.15.1 k8s.gcr.io/kube-controller-manager:v1.15.1
docker tag ${MY_REGISTRY}/kube-scheduler:v1.15.1 k8s.gcr.io/kube-scheduler:v1.15.1
docker tag ${MY_REGISTRY}/kube-proxy:v1.15.1 k8s.gcr.io/kube-proxy:v1.15.1
docker tag ${MY_REGISTRY}/pause:3.1 k8s.gcr.io/pause:3.1
docker tag ${MY_REGISTRY}/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
docker tag ${MY_REGISTRY}/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1

# remove the now-unneeded mirror-tagged images
docker images | grep ${MY_REGISTRY} | awk '{print "docker rmi " $1":"$2}' | sh -x
echo "end"
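A more generic variant of the same idea, sketched here, derives the image list from kubeadm itself instead of hard-coding the versions; MY_REGISTRY is an assumption and should be any mirror reachable from your network:

#!/bin/bash
# Pull the images kubeadm needs from a mirror and retag them to k8s.gcr.io.
MY_REGISTRY=gcr.azk8s.cn/google-containers
for img in $(kubeadm config images list 2>/dev/null); do
  name=${img#k8s.gcr.io/}
  docker pull "${MY_REGISTRY}/${name}"
  docker tag  "${MY_REGISTRY}/${name}" "k8s.gcr.io/${name}"
  docker rmi  "${MY_REGISTRY}/${name}"
done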
上面的全部操做能夠在一個節點上面完成,而後對進行復制便可。
Use kubeadm config print init-defaults to print the default configuration used for cluster initialization:
[root@localhost /]# kubeadm config print init-defaults
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: localhost.localdomain
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.14.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
From the default configuration you can see that imageRepository can be used to customize where the k8s images are pulled from during cluster initialization.
Based on the defaults, create the configuration file kubeadm.yaml used here to initialize the cluster with kubeadm:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 18.16.202.163
  bindPort: 6443
nodeRegistration:
  taints:
  - effect: PreferNoSchedule
    key: node-role.kubernetes.io/master
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.15.1
networking:
  podSubnet: 10.244.0.0/16
A cluster initialized with kubeadm's default configuration puts the node-role.kubernetes.io/master:NoSchedule taint on the master node, which prevents the master from being scheduled with ordinary workloads. Since this test environment only has a few nodes, the taint is changed here to node-role.kubernetes.io/master:PreferNoSchedule.
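If you later want to remove the master taint entirely on an already-initialized cluster, or restore the default, this can be done with kubectl taint; the node name "master" is taken from this setup:

# delete the master taint entirely (the trailing '-' removes it)
kubectl taint nodes master node-role.kubernetes.io/master-
# re-apply the default NoSchedule taint
kubectl taint nodes master node-role.kubernetes.io/master=:NoSchedule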
To initialize the cluster with kubeadm, run the following command on the master:
Because I am using a virtual machine with only one CPU allocated, the parameter --ignore-preflight-errors=NumCPU is specified; if you have enough CPUs, do not add this parameter.
[root@master /]# kubeadm init --config /home/kubeadm.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=NumCPU
[init] Using Kubernetes version: v1.15.1
[preflight] Running pre-flight checks
    [WARNING NumCPU]: the number of available CPUs 1 is less than the required 2
    [WARNING HTTPProxyCIDR]: connection to "10.96.0.0/12" uses proxy "https://18.16.202.169:8118". This may lead to malfunctional cluster setup. Make sure that Pod and Services IP ranges specified correctly as exceptions in proxy configuration
    [WARNING HTTPProxyCIDR]: connection to "10.244.0.0/16" uses proxy "https://18.16.202.169:8118". This may lead to malfunctional cluster setup. Make sure that Pod and Services IP ranges specified correctly as exceptions in proxy configuration
    [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [18.16.202.163 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [18.16.202.163 127.0.0.1 ::1]
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 18.16.202.163]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 46.528199 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:PreferNoSchedule]
[bootstrap-token] Using token: jrts59.18pe12atfafgcxca
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 18.16.202.163:6443 --token jrts59.18pe12atfafgcxca \
    --discovery-token-ca-cert-hash sha256:56d6c7d7b63a9109444ece68a1b155d8a9ac049ba57febab2c72d40d8ab7d426
The complete initialization output is recorded above. From it you can basically see the key steps that initializing and installing a Kubernetes cluster by hand would require. The key items are:
[kubelet-start] generates the kubelet configuration file "/var/lib/kubelet/config.yaml"
[certs] generates the various certificates
[kubeconfig] generates the kubeconfig files
[control-plane] creates the static pods for the apiserver, controller-manager and scheduler from the yaml files in the /etc/kubernetes/manifests directory
[bootstraptoken] generates the bootstrap token; record it, as it will be used later when adding nodes to the cluster with kubeadm join
The following commands configure how a regular user accesses the cluster with kubectl:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
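Alternatively, when working as root you can simply point kubectl at the admin kubeconfig for the current session:

export KUBECONFIG=/etc/kubernetes/admin.conf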
Finally, it prints the command for joining nodes to the cluster:

kubeadm join 18.16.202.163:6443 --token jrts59.18pe12atfafgcxca \
    --discovery-token-ca-cert-hash sha256:56d6c7d7b63a9109444ece68a1b155d8a9ac049ba57febab2c72d40d8ab7d426
If something goes wrong during initialization, reset and start over with:
kubeadm reset
Check the cluster status and confirm that every component is Healthy:
[root@master /]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
Add the slaver nodes to the cluster by running the following on each of them:
kubeadm join 18.16.202.163:6443 --token jrts59.18pe12atfafgcxca \
    --discovery-token-ca-cert-hash sha256:56d6c7d7b63a9109444ece68a1b155d8a9ac049ba57febab2c72d40d8ab7d426
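Bootstrap tokens expire after 24 hours by default. If the original join command no longer works when you add a node later, a fresh one can be generated on the master:

kubeadm token create --print-join-command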
Check on the master:
[root@master /]# kubectl get nodes -o wide
NAME      STATUS   ROLES    AGE     VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME
master    Ready    master   5h7m    v1.15.1   18.16.202.163   <none>        CentOS Linux 7 (Core)   4.17.6-1.el7.elrepo.x86_64   docker://18.9.8
slaver1   Ready    <none>   4h38m   v1.15.1   18.16.202.227   <none>        CentOS Linux 7 (Core)   4.17.6-1.el7.elrepo.x86_64   docker://18.9.8
slaver2   Ready    <none>   4h35m   v1.15.1   18.16.202.95    <none>        CentOS Linux 7 (Core)   4.17.6-1.el7.elrepo.x86_64   docker://18.9.8
Restart kubelet:
# reload all modified unit files
systemctl daemon-reload
# start kubelet
systemctl start kubelet.service
# enable kubelet on boot
systemctl enable kubelet.service
View the cluster information:
[root@master /]# kubectl cluster-info
Kubernetes master is running at https://18.16.202.163:6443
KubeDNS is running at https://18.16.202.163:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Check the running pods:
[root@master /]# kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE     IP              NODE      NOMINATED NODE   READINESS GATES
kube-system   coredns-5c98db65d4-gts57         1/1     Running   0          5h9m    10.244.2.2      slaver2   <none>           <none>
kube-system   coredns-5c98db65d4-qhwrw         1/1     Running   0          5h9m    10.244.1.2      slaver1   <none>           <none>
kube-system   etcd-master                      1/1     Running   2          5h9m    18.16.202.163   master    <none>           <none>
kube-system   kube-apiserver-master            1/1     Running   2          5h8m    18.16.202.163   master    <none>           <none>
kube-system   kube-controller-manager-master   1/1     Running   5          5h9m    18.16.202.163   master    <none>           <none>
kube-system   kube-flannel-ds-amd64-2lwl8      1/1     Running   0          41m     18.16.202.227   slaver1   <none>           <none>
kube-system   kube-flannel-ds-amd64-9bjck      1/1     Running   0          41m     18.16.202.95    slaver2   <none>           <none>
kube-system   kube-flannel-ds-amd64-gxxqg      1/1     Running   0          41m     18.16.202.163   master    <none>           <none>
kube-system   kube-proxy-6gxw9                 1/1     Running   0          4h39m   18.16.202.227   slaver1   <none>           <none>
kube-system   kube-proxy-rx8vv                 1/1     Running   0          4h37m   18.16.202.95    slaver2   <none>           <none>
kube-system   kube-proxy-skw5b                 1/1     Running   3          5h9m    18.16.202.163   master    <none>           <none>
kube-system   kube-scheduler-master            1/1     Running   6          5h8m    18.16.202.163   master    <none>           <none>
Next, install the flannel network add-on:
mkdir -p ~/k8s/
cd ~/k8s
curl -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml

podsecuritypolicy.extensions/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
If a node has multiple network interfaces, refer to flannel issue 39701: currently you need to use the --iface parameter in kube-flannel.yml to specify the name of the host's internal NIC, otherwise DNS resolution may fail. Download kube-flannel.yml locally and add --iface=<iface-name> to the flanneld startup arguments.
# If a node has multiple network interfaces, see flannel issue 39701:
# https://github.com/kubernetes/kubernetes/issues/39701
# Currently you need to use the --iface parameter in kube-flannel.yml to specify
# the name of the host's internal NIC; otherwise DNS resolution may fail and
# containers may be unable to communicate. Download kube-flannel.yml locally
# and add --iface=<iface-name> to the flanneld startup arguments.
      containers:
      - name: kube-flannel
        image: registry.cn-shanghai.aliyuncs.com/gcr-k8s/flannel:v0.10.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=ens33
        - --iface=eth0
# NOTE: the value of --iface=ens33 must be your actual NIC; multiple NICs can be specified.
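After editing kube-flannel.yml, re-apply it and delete the existing flannel pods so the DaemonSet recreates them with the new arguments. This is a sketch; the app=flannel label is taken from the stock manifest and may differ in your copy:

kubectl apply -f kube-flannel.yml
kubectl -n kube-system delete pod -l app=flannel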
[root@master /]# kubectl run curl --image=radial/busyboxplus:curl -it
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
If you don't see a command prompt, try pressing enter.
Once inside the pod, run nslookup kubernetes.default to confirm DNS resolution works:
[ root@curl-6bf6db5c4f-vhsqc:/ ]$ nslookup kubernetes.default
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
The container's current network:
[ root@curl-6bf6db5c4f-vhsqc:/ ]$ ifconfig
eth0      Link encap:Ethernet  HWaddr D6:20:96:C7:DA:5A
          inet addr:10.244.2.3  Bcast:0.0.0.0  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:22 errors:0 dropped:0 overruns:0 frame:0
          TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:2285 (2.2 KiB)  TX bytes:889 (889.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

[ root@curl-6bf6db5c4f-vhsqc:/ ]$ exit
Session ended, resume using 'kubectl attach curl-6bf6db5c4f-vhsqc -c curl -i -t' command when the pod is running
Check the nodes:
# nodes only show Ready after the network plugin has been installed and configured
[root@master /]# kubectl get node
NAME      STATUS   ROLES    AGE     VERSION
master    Ready    master   6h45m   v1.15.1
slaver1   Ready    <none>   6h15m   v1.15.1
slaver2   Ready    <none>   6h12m   v1.15.1
If you need to remove the node node2 from the cluster, run the following commands.
On the master node:
kubectl drain node2 --delete-local-data --force --ignore-daemonsets
kubectl delete node node2
On node2:
kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/
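kubeadm reset does not flush iptables or IPVS rules. If the node will be reused, they can be cleared manually; a cautionary sketch to verify before running on a host that carries other firewall rules:

iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ipvsadm --clear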
Modify config.conf in the kube-system/kube-proxy ConfigMap, setting mode: "ipvs":
[root@master /]# kubectl edit cm kube-proxy -n kube-system
configmap/kube-proxy edited
(cm is the abbreviation for configmaps.)
After the change, the kube-proxy configuration is:
apiVersion: v1
data:
  config.conf: |-
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    bindAddress: 0.0.0.0
    clientConnection:
      acceptContentTypes: ""
      burst: 10
      contentType: application/vnd.kubernetes.protobuf
      kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
      qps: 5
    clusterCIDR: 10.244.0.0/16
    configSyncPeriod: 15m0s
    conntrack:
      maxPerCore: 32768
      min: 131072
      tcpCloseWaitTimeout: 1h0m0s
      tcpEstablishedTimeout: 24h0m0s
    enableProfiling: false
    healthzBindAddress: 0.0.0.0:10256
    hostnameOverride: ""
    iptables:
      masqueradeAll: false
      masqueradeBit: 14
      minSyncPeriod: 0s
      syncPeriod: 30s
    ipvs:
      excludeCIDRs: null
      minSyncPeriod: 0s
      scheduler: ""
      strictARP: false
      syncPeriod: 30s
    kind: KubeProxyConfiguration
    metricsBindAddress: 127.0.0.1:10249
    mode: "ipvs"
    nodePortAddresses: null
    oomScoreAdj: -999
    portRange: ""
    resourceContainer: /kube-proxy
The mode: "ipvs" line is the part that was modified.
Restart the kube-proxy pods on each node:
[root@master /]# kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'
pod "kube-proxy-6gxw9" deleted
pod "kube-proxy-rx8vv" deleted
pod "kube-proxy-skw5b" deleted
Verify:
[root@master /]# kubectl get pod -n kube-system -o wide | grep kube-proxy
kube-proxy-8cwj4   1/1   Running   0   2m35s   18.16.202.163   master    <none>   <none>
kube-proxy-j9zpz   1/1   Running   0   2m48s   18.16.202.227   slaver1   <none>   <none>
kube-proxy-vfgjv   1/1   Running   0   2m38s   18.16.202.95    slaver2   <none>   <none>
[root@master /]# kubectl logs kube-proxy-8cwj4 -n kube-system
I0729 07:05:35.580934       1 server_others.go:170] Using ipvs Proxier.
W0729 07:05:35.585891       1 proxier.go:401] IPVS scheduler not specified, use rr by default
I0729 07:05:35.588572       1 server.go:534] Version: v1.15.1
I0729 07:05:35.642475       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0729 07:05:35.653344       1 config.go:96] Starting endpoints config controller
I0729 07:05:35.654584       1 controller_utils.go:1029] Waiting for caches to sync for endpoints config controller
I0729 07:05:35.654629       1 config.go:187] Starting service config controller
I0729 07:05:35.654649       1 controller_utils.go:1029] Waiting for caches to sync for service config controller
I0729 07:05:35.755738       1 controller_utils.go:1036] Caches are synced for endpoints config controller
I0729 07:05:35.755806       1 controller_utils.go:1036] Caches are synced for service config controller
The log prints Using ipvs Proxier, which shows that IPVS mode is enabled.
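Since ipvsadm was installed earlier, the IPVS rules can also be inspected directly; the virtual server table should now list the cluster service IPs (the exact entries depend on your services):

ipvsadm -Ln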