In previous articles we demonstrated installation via yum and via binaries; in this article we will use the officially recommended kubeadm to install and deploy the cluster.
kubeadm is the official Kubernetes tool for quickly installing a Kubernetes cluster. It is updated in step with every Kubernetes release, and it adjusts some cluster-configuration practices along the way, so experimenting with kubeadm is a good way to pick up the latest upstream best practices for cluster configuration.
Software | Version
---|---
Kubernetes | v1.12.2
CentOS 7.5 | CentOS Linux release 7.5.1804
Docker | v18.06
flannel | 0.10.0
The node and network plan is as follows:

IP | Role | Hostname
---|---|---
172.18.8.200 | k8s master | master.wzlinux.com
172.18.8.201 | k8s node01 | node01.wzlinux.com
172.18.8.202 | k8s node02 | node02.wzlinux.com
Disable the firewall.
```bash
systemctl stop firewalld
systemctl disable firewalld
```
Configure /etc/hosts and add the following entries.
```
172.18.8.200 master.wzlinux.com master
172.18.8.201 node01.wzlinux.com node01
172.18.8.202 node02.wzlinux.com node02
```
Disable SELinux.
```bash
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
setenforce 0
```
Disable swap.
```bash
swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab
```
Configure the kernel forwarding parameters.
```bash
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
```
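As a quick sanity check (not part of the original steps), you can read the values back to confirm they took effect:

```bash
# Both keys should print "= 1" after sysctl --system has run.
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
```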
Set up the Aliyun Kubernetes mirror (for hosts in mainland China).
```bash
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
```
Since both the master and the nodes need a container engine, we install Docker in advance.
Set up the Docker CE repository (the Aliyun mirror of the official repo).
```bash
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -P /etc/yum.repos.d/
```
Check which Docker versions the repository currently offers.
```
[root@master ~]# yum list docker-ce.x86_64 --showduplicates | sort -r
Loaded plugins: fastestmirror
Available Packages
 * updates: mirrors.aliyun.com
Loading mirror speeds from cached hostfile
 * extras: mirrors.aliyun.com
docker-ce.x86_64        3:18.09.0-3.el7            docker-ce-stable
docker-ce.x86_64        18.06.1.ce-3.el7           docker-ce-stable
docker-ce.x86_64        18.06.0.ce-3.el7           docker-ce-stable
docker-ce.x86_64        18.03.1.ce-1.el7.centos    docker-ce-stable
docker-ce.x86_64        18.03.0.ce-1.el7.centos    docker-ce-stable
docker-ce.x86_64        17.12.1.ce-1.el7.centos    docker-ce-stable
docker-ce.x86_64        17.12.0.ce-1.el7.centos    docker-ce-stable
docker-ce.x86_64        17.09.1.ce-1.el7.centos    docker-ce-stable
docker-ce.x86_64        17.09.0.ce-1.el7.centos    docker-ce-stable
docker-ce.x86_64        17.06.2.ce-1.el7.centos    docker-ce-stable
docker-ce.x86_64        17.06.1.ce-1.el7.centos    docker-ce-stable
docker-ce.x86_64        17.06.0.ce-1.el7.centos    docker-ce-stable
docker-ce.x86_64        17.03.3.ce-1.el7           docker-ce-stable
docker-ce.x86_64        17.03.2.ce-1.el7.centos    docker-ce-stable
docker-ce.x86_64        17.03.1.ce-1.el7.centos    docker-ce-stable
docker-ce.x86_64        17.03.0.ce-1.el7.centos    docker-ce-stable
 * base: mirrors.aliyun.com
```
Per the official recommendation for this Kubernetes release, we need to install v18.06.
```bash
yum install docker-ce-18.06.1.ce -y
```
Configure a registry mirror accelerator (for hosts in mainland China).
```bash
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://hdi5v8p1.mirror.aliyuncs.com"]
}
EOF
```
Start Docker.
```bash
systemctl daemon-reload
systemctl enable docker
systemctl start docker
```
Install kubelet, kubeadm, and kubectl on all machines, and enable the kubelet service:

```bash
yum install kubelet kubeadm kubectl -y
systemctl enable kubelet && systemctl start kubelet
```
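The command above installs whatever the repo currently marks as latest. Since this cluster targets v1.12.2, a sketch of pinning the package versions instead (assuming the Aliyun repo above carries these exact versions):

```bash
# Pin kubelet/kubeadm/kubectl to the cluster's target version.
yum install -y kubelet-1.12.2 kubeadm-1.12.2 kubectl-1.12.2
```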
Load the IPVS kernel modules so that kube-proxy on the node machines can use IPVS proxy rules.
```bash
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
```
Also add them to the boot-time startup file /etc/rc.local.
```bash
cat <<EOF >> /etc/rc.local
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
EOF
```
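To verify the modules are loaded:

```bash
# Should list ip_vs_rr, ip_vs_wrr and ip_vs_sh (plus their ip_vs dependency).
lsmod | grep ip_vs
```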
Because Google's image registry cannot be reached from mainland China, the workaround is to pull the images from another registry and retag them. Make sure the versions you download match your kubeadm version; we choose v1.12.2 and modify the tags. Just run the shell script below.
```bash
#!/bin/bash
kube_version=:v1.12.2
kube_images=(kube-proxy kube-scheduler kube-controller-manager kube-apiserver)
addon_images=(etcd-amd64:3.2.24 coredns:1.2.2 pause-amd64:3.1)

# Pull the core components from the Aliyun mirror and retag them as k8s.gcr.io.
for imageName in ${kube_images[@]} ; do
  docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName-amd64$kube_version
  docker image tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName-amd64$kube_version k8s.gcr.io/$imageName$kube_version
  docker image rm registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName-amd64$kube_version
done

# Pull the add-on images (etcd, CoreDNS, pause) and retag them the same way.
for imageName in ${addon_images[@]} ; do
  docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
  docker image tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
  docker image rm registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done

# kubeadm expects the etcd and pause images without the -amd64 suffix.
docker tag k8s.gcr.io/etcd-amd64:3.2.24 k8s.gcr.io/etcd:3.2.24
docker image rm k8s.gcr.io/etcd-amd64:3.2.24
docker tag k8s.gcr.io/pause-amd64:3.1 k8s.gcr.io/pause:3.1
docker image rm k8s.gcr.io/pause-amd64:3.1
```
If you are unsure which image versions the script should use, you can run `kubeadm init` first, read the required versions from its error messages, and then pull exactly those.
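Alternatively, kubeadm itself can print the list of images it needs (the `kubeadm config images` subcommand exists in v1.12; the init output further below also mentions `kubeadm config images pull`):

```bash
# Print the exact images and tags this kubeadm version expects.
kubeadm config images list --kubernetes-version v1.12.2
```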
If kubeadm is upgraded, we can choose the new version and simply download the images for it.
After running the script, all the required images are in place. Here we use a repository that someone else has prepared; of course, you could also build your own private registry.
```
[root@master ~]# docker images
REPOSITORY                           TAG       IMAGE ID       CREATED         SIZE
k8s.gcr.io/kube-proxy                v1.12.2   15e9da1ca195   4 weeks ago     96.5MB
k8s.gcr.io/kube-apiserver            v1.12.2   51a9c329b7c5   4 weeks ago     194MB
k8s.gcr.io/kube-controller-manager   v1.12.2   15548c720a70   4 weeks ago     164MB
k8s.gcr.io/kube-scheduler            v1.12.2   d6d57c76136c   4 weeks ago     58.3MB
k8s.gcr.io/etcd                      3.2.24    3cab8e1b9802   2 months ago    220MB
k8s.gcr.io/coredns                   1.2.2     367cdc8433a4   3 months ago    39.2MB
k8s.gcr.io/pause                     3.1       da86e6ba6ca1   11 months ago   742kB
```
Use `kubeadm init` to set up the master node automatically. The version must be specified explicitly, so kubeadm does not try to look up the latest release online.
```bash
kubeadm init --kubernetes-version=v1.12.2 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12
```
```
[init] using Kubernetes version: v1.12.2
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [master.wzlinux.com localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [master.wzlinux.com localhost] and IPs [172.18.8.200 127.0.0.1 ::1]
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [master.wzlinux.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.18.8.200]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 20.005448 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node master.wzlinux.com as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node master.wzlinux.com as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "master.wzlinux.com" as an annotation
[bootstraptoken] using token: 3mfpdm.atgk908eq1imgwqp
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 172.18.8.200:6443 --token 3mfpdm.atgk908eq1imgwqp --discovery-token-ca-cert-hash sha256:ff67ead9f43931f08e67873ba00695cd4b997f87dace5255ff45fc386b08941d
```
Once initialization finishes, follow the instructions in the output to configure kubectl access:
```bash
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
```
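As a quick sanity check that kubectl can reach the new control plane:

```bash
# The scheduler, controller-manager and etcd should all report Healthy.
kubectl get componentstatuses
kubectl cluster-info
```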
A pod network add-on must be installed so that pods can communicate with one another. The network has to be deployed before applications are deployed and before kube-dns starts; kubeadm only supports CNI-based networks.
Many network plugins are available, such as Calico, Canal, Flannel, Romana, and Weave Net. Since we passed --pod-network-cidr=10.244.0.0/16 during initialization, we use flannel.
```bash
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
```
Check that everything comes up properly; since the flannel image has to be downloaded, this can take a little longer.
```
[root@master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                         READY   STATUS    RESTARTS   AGE
kube-system   coredns-576cbf47c7-ptzmh                     1/1     Running   0          22m
kube-system   coredns-576cbf47c7-q78r9                     1/1     Running   0          22m
kube-system   etcd-master.wzlinux.com                      1/1     Running   0          21m
kube-system   kube-apiserver-master.wzlinux.com            1/1     Running   0          22m
kube-system   kube-controller-manager-master.wzlinux.com   1/1     Running   0          22m
kube-system   kube-flannel-ds-amd64-vqtzq                  1/1     Running   0          5m54s
kube-system   kube-proxy-ld262                             1/1     Running   0          22m
kube-system   kube-scheduler-master.wzlinux.com            1/1     Running   0          22m
```
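While waiting, you can watch the pods converge with kubectl's watch flag (a convenience, not part of the original steps):

```bash
# Streams updates until interrupted; wait for flannel and coredns to reach Running.
kubectl get pods --all-namespaces -w
```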
Troubleshooting tips:

- `docker logs ID` — check a container's startup logs, especially for containers that keep getting recreated.
- `kubectl --namespace=kube-system describe pod POD-NAME` — inspect a pod that is in an error state.
- `kubectl -n ${NAMESPACE} logs ${POD_NAME} -c ${CONTAINER_NAME}` — view the specific error.

The node machines likewise need to download images, such as kube-proxy and pause (the script below also pulls coredns); they need fewer images than the master.
```bash
#!/bin/bash
kube_version=:v1.12.2
coredns_version=1.2.2
pause_version=3.1

# kube-proxy
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64$kube_version
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64$kube_version k8s.gcr.io/kube-proxy$kube_version
docker image rm registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64$kube_version

# pause
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:$pause_version
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:$pause_version k8s.gcr.io/pause:$pause_version
docker image rm registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:$pause_version

# coredns
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$coredns_version
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$coredns_version k8s.gcr.io/coredns:$coredns_version
docker image rm registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$coredns_version
```
Check the downloaded images.
```
[root@node01 ~]# docker images
REPOSITORY              TAG       IMAGE ID       CREATED         SIZE
k8s.gcr.io/kube-proxy   v1.12.2   15e9da1ca195   4 weeks ago     96.5MB
k8s.gcr.io/pause        3.1       da86e6ba6ca1   11 months ago   742kB
```
When the initialization on the master node succeeded, the output ended with a kubeadm join command; that is the command used to add node machines.
```bash
kubeadm join 172.18.8.200:6443 --token 3mfpdm.atgk908eq1imgwqp --discovery-token-ca-cert-hash sha256:ff67ead9f43931f08e67873ba00695cd4b997f87dace5255ff45fc386b08941d
```
```
[preflight] running pre-flight checks
[discovery] Trying to connect to API Server "172.18.8.200:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://172.18.8.200:6443"
[discovery] Requesting info from "https://172.18.8.200:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "172.18.8.200:6443"
[discovery] Successfully established connection with API Server "172.18.8.200:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node01.wzlinux.com" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
```
Tip: if the join command reports that the token has expired, follow the prompt and run `kubeadm token create` on the master to generate a new one.
If you have forgotten the token, you can look it up with `kubeadm token list`.
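A convenient one-liner (the flag exists in kubeadm v1.12) creates a fresh token and prints the complete join command in one go:

```bash
kubeadm token create --print-join-command
```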
After running the join command on each node, check the node list on the master.
```
[root@master ~]# kubectl get nodes
NAME                 STATUS   ROLES    AGE   VERSION
master.wzlinux.com   Ready    master   64m   v1.12.2
node01.wzlinux.com   Ready    <none>   32m   v1.12.2
node02.wzlinux.com   Ready    <none>   15m   v1.12.2
```
You can copy the master's kubeconfig to the node machines so that kubectl can be used there as well.
```bash
scp /etc/kubernetes/admin.conf 172.18.8.201:/root/.kube/config
```
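Note that the target directory must already exist on the node; a sketch assuming root SSH access between the machines:

```bash
# Create the kubeconfig directory on the node first, then copy the admin config.
ssh 172.18.8.201 "mkdir -p /root/.kube"
scp /etc/kubernetes/admin.conf 172.18.8.201:/root/.kube/config
```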
Create a few pods to try it out.
```
[root@master ~]# kubectl run nginx --image=nginx --replicas=3
```
```
[root@master ~]# kubectl get pods -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP           NODE                 NOMINATED NODE
nginx-dbddb74b8-7qnsl   1/1     Running   0          27s   10.244.2.2   node02.wzlinux.com   <none>
nginx-dbddb74b8-ck4l9   1/1     Running   0          27s   10.244.1.2   node01.wzlinux.com   <none>
nginx-dbddb74b8-rpc2r   1/1     Running   0          27s   10.244.1.3   node01.wzlinux.com   <none>
```
With that, the complete cluster architecture is in place. To help you better understand how the pieces fit together, let's deploy an application and walk through how the components cooperate.
```bash
kubectl run httpd-app --image=httpd --replicas=2
```
Check the deployed application.
```
[root@master ~]# kubectl get pod -o wide
NAME                         READY   STATUS    RESTARTS   AGE   IP           NODE                 NOMINATED NODE
httpd-app-66cb7d499b-gskrg   1/1     Running   0          59s   10.244.1.2   node01.wzlinux.com   <none>
httpd-app-66cb7d499b-km5t8   1/1     Running   0          59s   10.244.2.2   node02.wzlinux.com   <none>
```
Kubernetes created the deployment httpd-app with two replica pods, running on node01 and node02 respectively.
The overall deployment flow is as follows:

1. kubectl sends a deployment request to the API Server.
2. The API Server notifies the controller manager to create the deployment resource.
3. The scheduler schedules the replica pods onto node01 and node02.
4. The kubelet on each node creates and runs the pods.

Two additional points:

- The application's configuration and current state are stored in etcd; when you run `kubectl get pod`, the API Server reads that data from etcd.
- flannel assigns an IP to every pod. Since no Service has been created yet, kube-proxy is not involved at this point (a sketch of creating one follows below).
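To see kube-proxy come into play, you can expose the nginx deployment created earlier as a Service (a sketch; the Service name and port are illustrative, not from the original article):

```bash
# Create a ClusterIP Service in front of the nginx pods.
kubectl expose deployment nginx --port=80
# Its virtual IP and endpoints should now appear in the proxy rules.
kubectl get svc nginx
```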
Everything is OK. At this point our cluster is fully deployed and you can start running applications on it.
Starting with Kubernetes 1.8, kube-proxy gained support for IPVS, which graduated to GA in Kubernetes 1.11.
Problems in iptables mode are hard to pin down, performance drops noticeably as the rules accumulate, and rules can even go missing; by comparison, IPVS is much more stable.
The default installation uses iptables mode, so we need to change the configuration to enable IPVS. First make sure the IPVS kernel modules are loaded:
```bash
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
```
Then edit the kube-proxy ConfigMap:

```bash
kubectl edit configmap kube-proxy -n kube-system
```

Find the following section:
```yaml
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
```
The mode field was originally empty, which defaults to iptables mode; change it to ipvs. The scheduler field is also empty by default, which means the round-robin (rr) load-balancing algorithm.
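For reference, the scheduler setting lives in the `ipvs` block of the same ConfigMap; a sketch of setting it explicitly (field layout as generated by kubeadm v1.12, with rr chosen as an example):

```yaml
ipvs:
  minSyncPeriod: 0s
  scheduler: "rr"   # round-robin; leaving it empty also yields rr by default
  syncPeriod: 30s
```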
Delete the existing kube-proxy pods so replacements are created with the new configuration, then check the new pod's logs:

```bash
kubectl delete pod kube-proxy-xxx -n kube-system
```
```
[root@master ~]# kubectl logs kube-proxy-t4t8j -n kube-system
I1211 03:43:01.297068       1 server_others.go:189] Using ipvs Proxier.
W1211 03:43:01.297549       1 proxier.go:365] IPVS scheduler not specified, use rr by default
I1211 03:43:01.297698       1 server_others.go:216] Tearing down inactive rules.
I1211 03:43:01.355516       1 server.go:464] Version: v1.13.0
I1211 03:43:01.366922       1 conntrack.go:52] Setting nf_conntrack_max to 196608
I1211 03:43:01.367294       1 config.go:102] Starting endpoints config controller
I1211 03:43:01.367304       1 config.go:202] Starting service config controller
I1211 03:43:01.367327       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
I1211 03:43:01.367343       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
I1211 03:43:01.467475       1 controller_utils.go:1034] Caches are synced for service config controller
I1211 03:43:01.467485       1 controller_utils.go:1034] Caches are synced for endpoints config controller
```
Use ipvsadm to view the IPVS rules; if the command is not available, install it directly with yum:

```bash
yum install -y ipvsadm
```
```
[root@master ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.96.0.1:443 rr
  -> 172.18.8.200:6443            Masq    1      0          0
TCP  10.96.0.10:53 rr
  -> 10.244.0.4:53                Masq    1      0          0
  -> 10.244.0.5:53                Masq    1      0          0
UDP  10.96.0.10:53 rr
  -> 10.244.0.4:53                Masq    1      0          0
  -> 10.244.0.5:53                Masq    1      0          0
```
The cleartext key material would take up too much space, so in the files below it is replaced with the placeholder `<key data>`. These are the four kubeconfig files kubeadm wrote to /etc/kubernetes (see the init output above).
admin.conf:

```yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <key data>
    server: https://172.18.8.200:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: <key data>
    client-key-data: <key data>
```
controller-manager.conf:

```yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <key data>
    server: https://172.18.8.200:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:kube-controller-manager
  name: system:kube-controller-manager@kubernetes
current-context: system:kube-controller-manager@kubernetes
kind: Config
preferences: {}
users:
- name: system:kube-controller-manager
  user:
    client-certificate-data: <key data>
    client-key-data: <key data>
```
kubelet.conf:

```yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <key data>
    server: https://172.18.8.200:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:node:master.wzlinux.com
  name: system:node:master.wzlinux.com@kubernetes
current-context: system:node:master.wzlinux.com@kubernetes
kind: Config
preferences: {}
users:
- name: system:node:master.wzlinux.com
  user:
    client-certificate-data: <key data>
    client-key-data: <key data>
```
scheduler.conf:

```yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <key data>
    server: https://172.18.8.200:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:kube-scheduler
  name: system:kube-scheduler@kubernetes
current-context: system:kube-scheduler@kubernetes
kind: Config
preferences: {}
users:
- name: system:kube-scheduler
  user:
    client-certificate-data: <key data>
    client-key-data: <key data>
```
Reference: https://kubernetes.io/docs/setup/independent/install-kubeadm/