Bootstrapping a secure Kubernetes cluster with kubeadm
Official documentation: https://kubernetes.io/docs/setup/independent/install-kubeadm/
Prerequisites
CentOS 7
At least 2 GB of RAM
At least 2 CPUs
Full network connectivity between all machines in the cluster (firewalld, SELinux, and NetworkManager disabled)
A unique hostname, MAC address, and product UUID on every node
The required ports are not already in use.
Swap disabled; otherwise the kubelet will not work properly
How to verify that the MAC address and product_uuid are unique on every node
Use ip link or ifconfig -a to read the MAC addresses of the network interfaces
Use sudo cat /sys/class/dmi/id/product_uuid to check the product UUID
(A UUID is an identifier generated on a machine that is guaranteed to be unique across all machines in the same space and time; on Linux it is stored in /sys/class/dmi/id/product_uuid.)
Kubernetes uses these values to uniquely identify the nodes in the cluster. If they are not unique on every node, the installation may fail.
Required ports
Master node: TCP 6443 (Kubernetes API server), 2379-2380 (etcd server client API), 10250 (kubelet API), 10251 (kube-scheduler), 10252 (kube-controller-manager)
Worker nodes: TCP 10250 (kubelet API), 30000-32767 (NodePort Services)
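The port lists above are taken from the official kubeadm documentation for this release; the quick check below is an added sketch (assumes the iproute package providing ss is installed) to confirm nothing is already listening on the Master ports:

ss -lnt | grep -E ':(6443|2379|2380|10250|10251|10252)\b' || echo "required master ports are free"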
Starting the deployment
Master node IP: 192.168.214.166
Node1 node IP: 192.168.214.167
Node2 node IP: 192.168.214.168
On all nodes, set the hostname and add hosts entries
[root@localhost ~]# hostnamectl set-hostname master
[root@localhost ~]# bash
[root@master ~]# vim /etc/hosts
192.168.214.166 master
192.168.214.167 node1
192.168.214.168 node2
On all nodes, disable the firewall, SELinux, and NetworkManager
[root@master ~]# systemctl stop firewalld
[root@master ~]# systemctl disable firewalld
[root@master ~]# sed -i "s/SELINUX=.*/SELINUX=disabled/g" /etc/selinux/config
[root@master ~]# setenforce 0
[root@master ~]# systemctl stop NetworkManager
[root@master ~]# systemctl disable NetworkManager
On all nodes, disable swap
[root@master ~]# swapoff -a
[root@master ~]# sed -i '/^.*swap.*/d' /etc/fstab
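A quick sanity check that swap really is off (an added verification step; the comments describe the expected result rather than captured output):

free -h | grep -i swap    # the Swap line should show 0B total
cat /proc/swaps           # should list no active swap devices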
Install Docker on all nodes
# Remove any existing Docker packages
[root@master ~]# yum remove docker docker-common container-selinux docker-selinux docker-engine-selinux docker-ce docker-ee docker-engine
# Install yum-utils, which provides yum-config-manager, together with device-mapper-persistent-data and lvm2, the two packages required by the devicemapper storage driver
[root@master ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@master ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@master ~]# yum makecache
[root@master ~]# yum install docker-ce -y
Note: Docker's cgroup driver must match the kubelet's cgroup driver; here we set it to systemd.
Modify Docker's startup parameters
sed -i 's#ExecStart=.*#ExecStart=/usr/bin/dockerd -s overlay2 --storage-opt overlay2.override_kernel_check=true --exec-opt native.cgroupdriver=systemd --log-driver=json-file --log-opt max-size=100m --log-opt max-file=10#g' /usr/lib/systemd/system/docker.service
sed -i '/ExecStartPost=.*/d' /usr/lib/systemd/system/docker.service
sed -i '/ExecStart=.*/aExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT' /usr/lib/systemd/system/docker.service
sed -i '/Environment=.*/d' /usr/lib/systemd/system/docker.service
# Log driver json-file, at most 100 MB per file, at most 10 rotated files kept
# Storage driver overlay2
# Cgroup driver systemd
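Editing the unit file works, but the same settings can also be kept in /etc/docker/daemon.json, which survives package upgrades. This is an optional alternative sketch, not part of the original steps; if you use it, remove the duplicate flags from ExecStart first, because dockerd refuses to start when an option is set both on the command line and in daemon.json:

cat > /etc/docker/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "storage-driver": "overlay2",
  "storage-opts": ["overlay2.override_kernel_check=true"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m", "max-file": "10" }
}
EOF
systemctl daemon-reload && systemctl restart docker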
Start Docker on all nodes and enable it at boot
[root@node2 ~]# systemctl daemon-reload
[root@node2 ~]# systemctl start docker.service
[root@node2 ~]# systemctl enable docker.service
Verify that Docker is running and that the parameters took effect
[root@master ~]# ps -ef | grep docker
root      3458     1  0 06:56 ?        00:00:00 /usr/bin/dockerd -s overlay2 --storage-opt overlay2.override_kernel_check=true --exec-opt native.cgroupdriver=systemd --log-driver=json-file --log-opt max-size=100m --log-opt max-file=10
root      3477  3458  0 06:56 ?        00:00:01 containerd --config /var/run/docker/containerd/containerd.toml --log-level info
root      3615  1139  0 07:00 pts/0    00:00:00 grep --color=auto docker
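You can also confirm the cgroup driver directly from docker info (an added check; the comment shows the expected value):

docker info 2>/dev/null | grep -i 'cgroup driver'    # should print: Cgroup Driver: systemd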
Install kubelet, kubeadm, and kubectl on all nodes
kubelet runs on every node in the cluster and is responsible for starting Pods and containers.
kubeadm is used to initialize the cluster.
kubectl is the Kubernetes command-line tool. With kubectl you can deploy and manage applications, inspect all kinds of resources, and create, delete, and update components.
kubeadm itself does not install kubelet or kubectl, so we have to install the matching versions of these packages ourselves.
For the concepts, components, and terminology of a Kubernetes cluster, see my earlier article: http://www.javashuo.com/article/p-zwqgdpuk-ck.html
Note: in a cluster bootstrapped with kubeadm, the Master components are handed over to the kubelet, i.e. kube-scheduler, kube-apiserver, kube-controller-manager, kube-proxy, and flannel all run as containers.
[root@master ~]# vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=kubernetes repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
enabled=1
[root@master ~]# yum repolist    # check that the repo is configured correctly
[root@master ~]# yum install kubelet kubeadm kubectl -y
[root@master ~]# rpm -ql kubelet
/etc/kubernetes/manifests               # static Pod manifest directory
/etc/sysconfig/kubelet                  # configuration file
/etc/systemd/system/kubelet.service     # systemd unit
/usr/bin/kubelet                        # main binary
[root@master ~]# systemctl enable kubelet && systemctl start kubelet
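The repo above installs whatever versions are latest. Since the images prepared later are v1.13.1, you may prefer to pin the package versions explicitly; this is an added suggestion, and the exact release suffix available in the mirror may differ:

yum install -y kubelet-1.13.1 kubeadm-1.13.1 kubectl-1.13.1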
Adjust kernel parameters on all nodes
[root@master ~]# sed -i '/net.bridge.bridge-nf-call-iptables/d' /usr/lib/sysctl.d/00-system.conf
[root@master ~]# sed -i '/net.bridge.bridge-nf-call-ip6tables/d' /usr/lib/sysctl.d/00-system.conf
[root@master ~]# sed -i '$a net.bridge.bridge-nf-call-iptables = 1' /usr/lib/sysctl.d/00-system.conf
[root@master ~]# sed -i '$a net.bridge.bridge-nf-call-ip6tables = 1' /usr/lib/sysctl.d/00-system.conf
[root@master ~]# sysctl --system
[root@master ~]# [ -f /proc/sys/fs/may_detach_mounts ] && sed -i "/fs.may_detach_mounts/ d" /etc/sysctl.conf
[root@master ~]# [ -f /proc/sys/fs/may_detach_mounts ] && echo "fs.may_detach_mounts=1" >> /etc/sysctl.conf
[root@master ~]# sysctl -p
fs.may_detach_mounts = 1
Note: CentOS 7.4 introduced a new kernel parameter, /proc/sys/fs/may_detach_mounts, which defaults to 0; when containers are running on the system it needs to be set to 1. https://bugzilla.redhat.com/show_bug.cgi?id=1441737
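The net.bridge.bridge-nf-call-* sysctls only exist when the br_netfilter kernel module is loaded; if sysctl --system complains that the key is missing, load the module and make it persistent (an added precaution, not in the original steps):

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
lsmod | grep br_netfilter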
Configure the kubelet startup parameters (adjust the node name on each node)
[root@master ~]# vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--cluster-dns=172.17.0.10 --cluster-domain=cluster.local --hostname-override=master --provider-id=master --pod_infra_container_image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1 --max-pods=40 --cert-dir=/var/lib/kubelet/pki --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --root-dir=/var/lib/kubelet --authentication-token-webhook --resolv-conf=/etc/resolv.conf --rotate-certificates --feature-gates=RotateKubeletClientCertificate=true,RotateKubeletServerCertificate=true,CustomPodDNS=true --pod-manifest-path=/etc/kubernetes/manifests"
[root@master ~]# systemctl daemon-reload
--cluster-dns=172.17.0.10 # Comma-separated list of DNS server IPs. For Pods with dnsPolicy=ClusterFirst, this value is used as the containers' DNS server.
Note: all DNS servers in the list must serve the same set of records, otherwise name resolution inside the cluster may not work correctly. There is no guarantee which DNS server will be contacted for a lookup.
--cluster-domain=cluster.local # Domain of the cluster. If set, the kubelet configures every container to search this domain in addition to the host's search domains.
--hostname-override=master # If non-empty, use this string as the node's identity instead of the actual hostname. If --cloud-provider is set, the cloud provider determines the node name.
--provider-id=master # Unique identifier of the node in the machine database, i.e. the cloud provider.
--pod_infra_container_image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1 # The image Kubernetes uses for the pause container.
Reference: http://www.itboth.com/d/MrQjym/nginx-kubernetes
--max-pods=40 # Maximum number of Pods on this node.
--cert-dir=/var/lib/kubelet/pki # Directory where TLS certificates are stored. Ignored if --tls-cert-file and --tls-private-key-file are provided.
--network-plugin=cni # <Warning: alpha feature> Network plugin invoked for various events in the kubelet/Pod lifecycle. Only effective when container-runtime is set to docker.
--cni-conf-dir=/etc/cni/net.d # <Warning: alpha feature> Directory searched for CNI configuration files. Only effective when container-runtime is set to docker.
--cni-bin-dir=/opt/cni/bin # <Warning: alpha feature> List of directories searched for CNI plugin binaries. Only effective when container-runtime is set to docker.
--root-dir=/var/lib/kubelet # Directory for kubelet files. Some volumes live under this directory, so make sure it sits on a large disk.
--authentication-token-webhook # Use the TokenReview API to determine the identity behind bearer tokens.
--resolv-conf=/etc/resolv.conf # If a Pod's dnsPolicy is set to "Default", it inherits the name resolution configuration from the node the Pod runs on.
--feature-gates=RotateKubeletClientCertificate=true,RotateKubeletServerCertificate=true,CustomPodDNS=true --rotate-certificates
# Enables client and server certificate rotation <Warning: beta feature>. When a certificate is close to expiry, the kubelet requests a new certificate from the kube-apiserver and rotates its client certificate automatically.
--pod-manifest-path=/etc/kubernetes/manifests"
Up to this point all nodes are configured identically: SELinux, the firewall, and swap are disabled; Docker is installed and configured; the kubelet is installed and configured; the kernel parameters are adjusted.
Next, deploy the Master
To work around slow or unreliable network access, we pull the required images manually in advance and re-tag them:
On the Master, prepare:
docker pull mirrorgooglecontainers/kube-apiserver:v1.13.1
docker pull mirrorgooglecontainers/kube-controller-manager:v1.13.1
docker pull mirrorgooglecontainers/kube-scheduler:v1.13.1
docker pull mirrorgooglecontainers/kube-proxy:v1.13.1
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd:3.2.24
docker pull coredns/coredns:1.2.6
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64
docker tag mirrorgooglecontainers/kube-apiserver:v1.13.1 k8s.gcr.io/kube-apiserver:v1.13.1
docker tag mirrorgooglecontainers/kube-controller-manager:v1.13.1 k8s.gcr.io/kube-controller-manager:v1.13.1
docker tag mirrorgooglecontainers/kube-scheduler:v1.13.1 k8s.gcr.io/kube-scheduler:v1.13.1
docker tag mirrorgooglecontainers/kube-proxy:v1.13.1 k8s.gcr.io/kube-proxy:v1.13.1
docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag mirrorgooglecontainers/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
docker tag coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
docker rmi mirrorgooglecontainers/kube-apiserver:v1.13.1
docker rmi mirrorgooglecontainers/kube-controller-manager:v1.13.1
docker rmi mirrorgooglecontainers/kube-scheduler:v1.13.1
docker rmi mirrorgooglecontainers/kube-proxy:v1.13.1
docker rmi mirrorgooglecontainers/pause:3.1
docker rmi mirrorgooglecontainers/etcd:3.2.24
docker rmi coredns/coredns:1.2.6
docker rmi registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64

[root@master ~]# docker images
REPOSITORY                            TAG             IMAGE ID       CREATED         SIZE
k8s.gcr.io/kube-proxy                 v1.13.1         fdb321fd30a0   2 months ago    80.2MB
k8s.gcr.io/kube-apiserver             v1.13.1         40a63db91ef8   2 months ago    181MB
k8s.gcr.io/kube-scheduler             v1.13.1         ab81d7360408   2 months ago    79.6MB
k8s.gcr.io/kube-controller-manager    v1.13.1         26e6f1db2a52   2 months ago    146MB
k8s.gcr.io/coredns                    1.2.6           f59dcacceff4   3 months ago    40MB
k8s.gcr.io/etcd                       3.2.24          3cab8e1b9802   5 months ago    220MB
quay.io/coreos/flannel                v0.10.0-amd64   f0fad859c909   13 months ago   44.6MB
k8s.gcr.io/pause                      3.1             da86e6ba6ca1   14 months ago   742kB

On the Nodes, prepare:
docker pull mirrorgooglecontainers/kube-proxy:v1.13.1
docker pull mirrorgooglecontainers/pause:3.1
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64
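The long list above can also be written as an equivalent loop; this sketch performs the same pull/tag/rmi sequence for the Master images (the Node images would be handled the same way with the shorter list, and they also need the same re-tagging):

for img in kube-apiserver:v1.13.1 kube-controller-manager:v1.13.1 kube-scheduler:v1.13.1 \
           kube-proxy:v1.13.1 pause:3.1 etcd:3.2.24; do
  docker pull mirrorgooglecontainers/$img        # pull from the mirror registry
  docker tag  mirrorgooglecontainers/$img k8s.gcr.io/$img   # re-tag under the name kubeadm expects
  docker rmi  mirrorgooglecontainers/$img        # drop the mirror tag
done
docker pull coredns/coredns:1.2.6
docker tag  coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6
docker rmi  coredns/coredns:1.2.6
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64
docker tag  registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
docker rmi  registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64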
Initialize the Master with kubeadm
--apiserver-advertise-address
Specifies which IP the Master uses to communicate with the other members of the cluster. If not specified, the IP of the interface with the default gateway is used.
--apiserver-bind-port
Specifies the port the Master's API server listens on.
--pod-network-cidr
Specifies the Pod network range. Whether this flag is required depends on the network add-on; this article uses the classic flannel add-on.
--service-cidr
Specifies the Service network range. Note that it must contain the cluster DNS address given to the kubelet (--cluster-dns).
--service-dns-domain
Specifies the DNS domain used by the internal Kubernetes Service network. Note that it must match the value given to the kubelet.
--token-ttl
Lifetime of the bootstrap token.
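The same initialization can also be driven from a config file instead of command-line flags. A minimal sketch, assuming the kubeadm v1beta1 config API that ships with 1.13 (the field names are assumptions from that API; verify against the kubeadm documentation for your exact version before relying on it). The flag-based invocation used in this walkthrough follows below.

cat > kubeadm-config.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.214.166
  bindPort: 6443
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.1
networking:
  podSubnet: 172.16.0.0/16
  serviceSubnet: 172.17.0.0/16
  dnsDomain: cluster.local
EOF
kubeadm init --config kubeadm-config.yaml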
[root@master ~]# kubeadm init --apiserver-advertise-address 192.168.214.166 --apiserver-bind-port=6443 --pod-network-cidr=172.16.0.0/16 --service-cidr=172.17.0.0/16 --service-dns-domain=cluster.local --token-ttl=2400h0m0s --kubernetes-version=v1.13.1
[init] Using Kubernetes version: v1.13.1
[preflight] Running pre-flight checks
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.2. Latest validated version: 18.06
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [192.168.214.166 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [192.168.214.166 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [172.17.0.1 192.168.214.166]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 22.508526 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "master" as an annotation
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: n6312v.ewq7swb59ceu2fce
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 192.168.214.166:6443 --token n6312v.ewq7swb59ceu2fce --discovery-token-ca-cert-hash sha256:25369a6cbe5abc31a3177d28e302c9ea9236766e4447051ad259bef6c209df67
Notes:
[preflight] kubeadm runs pre-flight checks before initializing: the Docker version, the kubelet service, swap, and so on.
[certs] Generates the CA certificate and key.
Generates the apiserver certificate and key.
Generates the remaining certificates and keys under /etc/kubernetes/pki.
[kubeconfig] Generates the kubeconfig files under /etc/kubernetes/; the kubelet (kubelet.conf), kubectl (admin.conf), and the other components need them to talk to the Master.
Generates the static Pod manifests under /etc/kubernetes/manifests/; the kubelet uses these YAML files to start the Master components.
Labels the Master node with node-role.kubernetes.io/master="" and taints it so that ordinary Pods are not scheduled onto it.
Configures the relevant RBAC rules.
[addons] Installs the essential add-ons CoreDNS and kube-proxy.
Initialization succeeded, followed by some usage hints.
[bootstraptoken] Also note that the token must be saved; nodes need it to join the cluster and it cannot be recovered from the output later. If you do lose it, run kubeadm token list on the Master to view existing tokens, or kubeadm token create to create a new one.
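If the discovery hash has been lost as well, both pieces of a fresh join command can be regenerated on the Master (flags and the hash recipe are as documented for kubeadm 1.13):

kubeadm token create --print-join-command
# or compute the CA certificate hash manually:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'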
If the command aborts with a swap error, add the flag --ignore-preflight-errors=Swap.
If you need to redo kubeadm init, it is best to clean up the previous initialization and the container services completely first:
(1) kubeadm reset
(2) systemctl stop kubelet
(3) docker stop $(docker ps -qa) && docker rm $(docker ps -qa)   # If other services run on this Docker host, do not use this command; find and remove the Kubernetes-related containers by hand instead
(4) systemctl start kubelet
(5) kubeadm init
Configure kubectl
kubectl is the command-line tool for managing the Kubernetes cluster.
Run the following as root on the Master to configure kubectl:
[root@master ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile [root@master ~]# source /etc/profile [root@master ~]# echo $KUBECONFIG /etc/kubernetes/admin.conf ##安裝時出現了這個報錯,就是沒有配置上面參數致使的,/etc/kubernetes/admin.conf這個文件主要是集羣初始化的時候用來傳遞參數的 The connection to the server localhost:8080 was refused - did you specify the right host or port?
Configure kubectl auto-completion:
Since version 1.3, kubectl ships a completion command that can be used for shell auto-completion.
# yum install -y bash-completion
# locate bash_completion    # if the locate command is missing, install it with: yum install -y mlocate
/usr/share/bash-completion/bash_completion
# source /usr/share/bash-completion/bash_completion
# source <(kubectl completion bash)
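To make completion survive new shell sessions, append it to the shell profile (a small added convenience):

echo 'source <(kubectl completion bash)' >> ~/.bashrc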
Check the component status with kubectl
[root@master ~]# kubectl get componentstatus    # short name: cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}
List the Pods in the cluster; at this point coredns is Pending while the Master components are Running
[root@master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   coredns-86c58d9df4-8r2tf          0/1     Pending   0          5h57m
kube-system   coredns-86c58d9df4-nltw6          0/1     Pending   0          5h57m
kube-system   etcd-master                       1/1     Running   0          5h57m
kube-system   kube-apiserver-master             1/1     Running   0          5h57m
kube-system   kube-controller-manager-master    1/1     Running   0          5h56m
kube-system   kube-proxy-xcpcg                  1/1     Running   0          5h57m
kube-system   kube-scheduler-master             1/1     Running   0          5h57m
For the Kubernetes cluster to work, a Pod network must be installed; otherwise Pods cannot communicate with each other.
Although the core components above are Running, they are not running on the Pod network (which has not been created yet) but on the host network. Let's verify this, using kube-apiserver as an example:
[root@master ~]# kubectl get pods -n kube-system kube-apiserver-master
NAME                    READY   STATUS    RESTARTS   AGE
kube-apiserver-master   1/1     Running   3          1d
# Find the kube-apiserver container ID
[root@master ~]# docker ps | grep apiserver
c120c761b764   9df3c00f55e6   "kube-apiserver --..."   33 minutes ago
# Inspect the network mode of the corresponding container
[root@master ~]# docker inspect c120c761b764
"NetworkMode": "host",
Next, install the Pod network
[root@master ~]# wget https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
[root@master ~]# sed -i 's#"Network": "10.244.0.0/16",#"Network": "172.16.0.0/16",#g' kube-flannel.yml
[root@master ~]# sed -i 's#quay.io/coreos/flannel:v0.9.1-amd64#quay.io/coreos/flannel:v0.10.0-amd64#g' kube-flannel.yml
## Reference content of kube-flannel.yml
[root@master ~]# vim kube-flannel.yml
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "type": "flannel",
      "delegate": {
        "isDefaultGateway": true
      }
    }
  net-conf.json: |
    {
      "Network": "172.16.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.10.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conf
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.10.0-amd64
        command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr" ]
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
Note: the Network field in the flannel manifest must match the Pod network specified with kubeadm init; we use 172.16.0.0/16 (the default is 10.244.0.0/16). The flannel version is changed to v0.10.0 to match the image we downloaded earlier.
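The apply step itself is not shown above; after editing the manifest it still has to be submitted to the cluster:

[root@master ~]# kubectl apply -f kube-flannel.yml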
Check whether the CoreDNS Pods are now running; if they are, the Master installation succeeded
[root@master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   coredns-86c58d9df4-8r2tf          1/1     Running   0          6h16m
kube-system   coredns-86c58d9df4-nltw6          1/1     Running   0          6h16m
kube-system   etcd-master                       1/1     Running   0          6h15m
kube-system   kube-apiserver-master             1/1     Running   0          6h15m
kube-system   kube-controller-manager-master    1/1     Running   0          6h15m
kube-system   kube-flannel-ds-amd64-b9kfs       1/1     Running   0          8m49s
kube-system   kube-proxy-xcpcg                  1/1     Running   0          6h16m
kube-system   kube-scheduler-master             1/1     Running   0          6h15m
Check the Master node status; it now shows Ready
[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
master   Ready    master   6h23m   v1.13.3
# Verify the API server from the command line
[root@master ~]# curl --cacert /etc/kubernetes/pki/ca.crt --cert /etc/kubernetes/pki/apiserver-kubelet-client.crt --key /etc/kubernetes/pki/apiserver-kubelet-client.key https://192.168.214.166:6443
Add the Node nodes to the cluster
Run the last command returned by the kubeadm init output on each node:
[root@node1 ~]# kubeadm join 192.168.214.166:6443 --token n6312v.ewq7swb59ceu2fce --discovery-token-ca-cert-hash sha256:25369a6cbe5abc31a3177d28e302c9ea9236766e4447051ad259bef6c209df67
[preflight] Running pre-flight checks
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.2. Latest validated version: 18.06
[discovery] Trying to connect to API Server "192.168.214.166:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.214.166:6443"
[discovery] Requesting info from "https://192.168.214.166:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.214.166:6443"
[discovery] Successfully established connection with API Server "192.168.214.166:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node1" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
Check the cluster status; the cluster has been deployed successfully
[root@master my.conf]# kubectl get nodes
NAME     STATUS   ROLES    AGE    VERSION
master   Ready    master   12h    v1.13.3
node1    Ready    <none>   28m    v1.13.3
node2    Ready    <none>   153m   v1.13.3
[root@master my.conf]# kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE    IP                NODE     NOMINATED NODE   READINESS GATES
kube-system   coredns-86c58d9df4-8r2tf          1/1     Running   0          12h    172.16.0.3        master   <none>           <none>
kube-system   coredns-86c58d9df4-nltw6          1/1     Running   0          12h    172.16.0.2        master   <none>           <none>
kube-system   etcd-master                       1/1     Running   0          12h    192.168.214.166   master   <none>           <none>
kube-system   kube-apiserver-master             1/1     Running   0          12h    192.168.214.166   master   <none>           <none>
kube-system   kube-controller-manager-master    1/1     Running   0          12h    192.168.214.166   master   <none>           <none>
kube-system   kube-flannel-ds-dms4z             1/1     Running   0          136m   192.168.214.166   master   <none>           <none>
kube-system   kube-flannel-ds-gf4zk             1/1     Running   6          28m    192.168.214.167   node1    <none>           <none>
kube-system   kube-flannel-ds-wfbh5             1/1     Running   2          136m   192.168.214.168   node2    <none>           <none>
kube-system   kube-proxy-d486m                  1/1     Running   0          28m    192.168.214.167   node1    <none>           <none>
kube-system   kube-proxy-qpntl                  1/1     Running   0          154m   192.168.214.168   node2    <none>           <none>
kube-system   kube-proxy-xcpcg                  1/1     Running   0          12h    192.168.214.166   master   <none>           <none>
kube-system   kube-scheduler-master             1/1     Running   0          12h    192.168.214.166   master   <none>           <none>
Tearing down the cluster
Remove a node
kubectl drain <node-name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node-name>
[root@master ~]# kubectl drain node1 --delete-local-data --force --ignore-daemonsets
node/node1 cordoned
WARNING: Ignoring DaemonSet-managed pods: kube-flannel-ds-qmdxs, kube-proxy-rzcpr
node/node1 drained
[root@master ~]# kubectl delete node node1
node "node1" deleted
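On the node that was just removed, it is also worth wiping the local kubeadm state so that it can rejoin cleanly later (an added step, run on the node itself rather than the Master):

[root@node1 ~]# kubeadm reset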
After the node has been removed, we can run the following command to reset the cluster:
[root@master ~]# kubeadm reset
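Note that kubeadm reset does not clean up iptables rules or CNI configuration; if you want a truly clean machine, remove those leftovers as well (an added, cautious extra step; adapt it to your environment before running):

iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X   # flush rules left behind by kube-proxy
rm -rf /etc/cni/net.d $HOME/.kube                                           # remove CNI config and the local kubeconfig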