Hostname | IP | Role |
---|---|---|
k8s-master01 | 10.3.1.20 | etcd、Master、Node、keepalived |
k8s-master02 | 10.3.1.21 | etcd、Master、Node、keepalived |
k8s-master03 | 10.3.1.25 | etcd、Master、Node、keepalived |
VIP | 10.3.1.29 | None |
Version info: Kubernetes v1.12.0, etcd v3.2.22, Docker 17.03.2-ce (18.06 is the highest version validated for v1.12), Calico v3.1, Ubuntu 16.04 (xenial).
(HA architecture diagram from the official documentation; image not reproduced here.)
The two components most critical to high availability: etcd and kube-apiserver.
Other core components: kube-controller-manager, kube-scheduler, kubelet and kube-proxy.
1. Passwordless SSH login between all k8s nodes.
2. Time synchronization across all nodes.
3. Swap must be turned off on every node (`swapoff -a`), otherwise kubelet fails to start.
4. Add every node's hostname and IP to /etc/hosts for name resolution. A combined prep sketch for items 2-4 follows below.
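A minimal prep sketch for the items above, run on every node. The NTP package choice and the exact /etc/hosts entries are assumptions based on the node table at the top; adapt them to your environment.

```bash
# Disable swap now and keep it disabled across reboots, otherwise kubelet refuses to start
swapoff -a
sed -i '/\sswap\s/ s/^/#/' /etc/fstab

# Time synchronization (ntpdate used here as an example; chrony or systemd-timesyncd work too)
apt-get install -y ntpdate && ntpdate ntp.ubuntu.com

# Hostname/IP resolution for all nodes
cat >> /etc/hosts <<EOF
10.3.1.20 k8s-master01
10.3.1.21 k8s-master02
10.3.1.25 k8s-master03
EOF
```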
kubeadm can create an HA cluster in two ways: with a stacked etcd cluster that kubeadm itself manages on the control-plane nodes, or with an external etcd cluster. This guide uses an external etcd cluster.
A healthy etcd cluster is a prerequisite for running the Kubernetes cluster, so the etcd cluster is deployed first.
Download the cfssl binaries directly:
```bash
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
chmod +x cfssl_linux-amd64
mv cfssl_linux-amd64 /opt/bin/cfssl
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod +x cfssljson_linux-amd64
mv cfssljson_linux-amd64 /opt/bin/cfssljson
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl-certinfo_linux-amd64
mv cfssl-certinfo_linux-amd64 /opt/bin/cfssl-certinfo
echo 'export PATH=/opt/bin:$PATH' > /etc/profile.d/k8s.sh
```
All Kubernetes-related executables are placed under /opt/bin/.
```bash
root@k8s-master01:~# mkdir ssl
root@k8s-master01:~# cd ssl/
root@k8s-master01:~/ssl# cfssl print-defaults config > config.json
root@k8s-master01:~/ssl# cfssl print-defaults csr > csr.json
# Create the following ca-config.json based on the format of config.json
# The expiry is set to 87600h
root@k8s-master01:~/ssl# cat ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
```
```bash
root@k8s-master01:~/ssl# cat ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "GD",
      "L": "SZ",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
```
```bash
root@k8s-master01:~/ssl# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
root@k8s-master01:~/ssl# ls ca*
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem
```
```bash
root@k8s-master01:~/ssl# mkdir -p /etc/kubernetes/ssl
root@k8s-master01:~/ssl# cp ca* /etc/kubernetes/ssl
root@k8s-master01:~/ssl# scp -r /etc/kubernetes 10.3.1.21:/etc/
root@k8s-master01:~/ssl# scp -r /etc/kubernetes 10.3.1.25:/etc/
```
With the CA certificate in place, etcd can now be configured.
```bash
root@k8s-master01:~$ wget https://github.com/coreos/etcd/releases/download/v3.2.22/etcd-v3.2.22-linux-amd64.tar.gz
root@k8s-master01:~$ tar xzf etcd-v3.2.22-linux-amd64.tar.gz && cd etcd-v3.2.22-linux-amd64   # extract the tarball first
root@k8s-master01:~$ cp etcd etcdctl /opt/bin/
```
For Kubernetes v1.12, the etcd version must not be lower than 3.2.18.
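To confirm the binary just installed satisfies this requirement (output abbreviated):

```bash
root@k8s-master01:~$ /opt/bin/etcd --version
etcd Version: 3.2.22
...
```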
```bash
root@k8s-master01:~/ssl# cat etcd-csr.json
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "10.3.1.20",
    "10.3.1.21",
    "10.3.1.25"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "GD",
      "L": "SZ",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
# Note: the hosts field must list the IPs of all etcd nodes, otherwise etcd will fail to start.
```
```bash
root@k8s-master01:~/ssl# cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem \
>   -ca-key=/etc/kubernetes/ssl/ca-key.pem \
>   -config=/etc/kubernetes/ssl/ca-config.json \
>   -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
2018/10/01 10:01:14 [INFO] generate received request
2018/10/01 10:01:14 [INFO] received CSR
2018/10/01 10:01:14 [INFO] generating key: rsa-2048
2018/10/01 10:01:15 [INFO] encoded CSR
2018/10/01 10:01:15 [INFO] signed certificate with serial number 379903753757286569276081473959703411651822370300
2018/02/06 10:01:15 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
root@k8s-master:~/ssl# ls etcd*
etcd.csr  etcd-csr.json  etcd-key.pem  etcd.pem
# The -profile=kubernetes value comes from the profiles field of -config=/etc/kubernetes/ssl/ca-config.json.
```
Copy the certificates to the corresponding directory on all etcd nodes:
```bash
root@k8s-master01:~/ssl# mkdir -p /etc/etcd/ssl    # create the target directory first
root@k8s-master01:~/ssl# cp etcd*.pem /etc/etcd/ssl
root@k8s-master01:~/ssl# scp -r /etc/etcd 10.3.1.21:/etc/
etcd-key.pem                                  100% 1675     1.5KB/s   00:00
etcd.pem                                      100% 1407     1.4KB/s   00:00
root@k8s-master01:~/ssl# scp -r /etc/etcd 10.3.1.25:/etc/
etcd-key.pem                                  100% 1675     1.6KB/s   00:00
etcd.pem                                      100% 1407     1.4KB/s   00:00
```
With the certificates ready, the etcd systemd unit can be configured:
```bash
root@k8s-master01:~# mkdir -p /var/lib/etcd    # the etcd working directory must be created first
root@k8s-master01:~# cat /etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/opt/bin/etcd \
  --name=etcd-host0 \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --initial-advertise-peer-urls=https://10.3.1.20:2380 \
  --listen-peer-urls=https://10.3.1.20:2380 \
  --listen-client-urls=https://10.3.1.20:2379,http://127.0.0.1:2379 \
  --advertise-client-urls=https://10.3.1.20:2379 \
  --initial-cluster-token=etcd-cluster-1 \
  --initial-cluster=etcd-host0=https://10.3.1.20:2380,etcd-host1=https://10.3.1.21:2380,etcd-host2=https://10.3.1.25:2380 \
  --initial-cluster-state=new \
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
```
```bash
root@k8s-master01:~/ssl# systemctl daemon-reload
root@k8s-master01:~/ssl# systemctl enable etcd
root@k8s-master01:~/ssl# systemctl start etcd
```
Copy the etcd unit file to the other two nodes, adjust the node-specific values, and start etcd there as well; a sketch of the changes follows below.
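A sketch of the node-specific changes for k8s-master02 (10.3.1.21), assuming the unit file is copied to the same path; k8s-master03 is analogous with etcd-host2 and 10.3.1.25. Note that the `--initial-cluster` line stays identical on every member.

```bash
# On k8s-master01: copy the unit file over
scp /etc/systemd/system/etcd.service 10.3.1.21:/etc/systemd/system/

# On k8s-master02: create the data dir, change the member name and the local URLs, then start etcd
mkdir -p /var/lib/etcd
sed -i \
  -e 's/--name=etcd-host0/--name=etcd-host1/' \
  -e 's|peer-urls=https://10.3.1.20|peer-urls=https://10.3.1.21|' \
  -e 's|client-urls=https://10.3.1.20|client-urls=https://10.3.1.21|' \
  /etc/systemd/system/etcd.service
systemctl daemon-reload && systemctl enable etcd && systemctl start etcd
```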
Check the cluster status:
Because etcd is secured with TLS certificates, etcdctl commands must carry the certificates:
```bash
# List the etcd members
root@k8s-master01:~# etcdctl --key-file /etc/etcd/ssl/etcd-key.pem --cert-file /etc/etcd/ssl/etcd.pem --ca-file /etc/kubernetes/ssl/ca.pem member list
702819a30dfa37b8: name=etcd-host2 peerURLs=https://10.3.1.20:2380 clientURLs=https://10.3.1.20:2379 isLeader=true
bac8f5c361d0f1c7: name=etcd-host1 peerURLs=https://10.3.1.21:2380 clientURLs=https://10.3.1.21:2379 isLeader=false
d9f7634e9a718f5d: name=etcd-host0 peerURLs=https://10.3.1.25:2380 clientURLs=https://10.3.1.25:2379 isLeader=false
# Or check whether the cluster is healthy
root@k8s-master01:~/ssl# etcdctl --key-file /etc/etcd/ssl/etcd-key.pem --cert-file /etc/etcd/ssl/etcd.pem --ca-file /etc/kubernetes/ssl/ca.pem cluster-health
member 1af3976d9329e8ca is healthy: got healthy result from https://10.3.1.20:2379
member 34b6c7df0ad76116 is healthy: got healthy result from https://10.3.1.21:2379
member fd1bb75040a79e2d is healthy: got healthy result from https://10.3.1.25:2379
cluster is healthy
```
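To avoid repeating the certificate flags on every call, a shell alias can help. This is a convenience sketch, not part of the original setup:

```bash
# Wrap etcdctl (v2 API flags, as used above) with the TLS options
alias etcdctl='etcdctl --key-file /etc/etcd/ssl/etcd-key.pem --cert-file /etc/etcd/ssl/etcd.pem --ca-file /etc/kubernetes/ssl/ca.pem'
etcdctl cluster-health
```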
The Docker version validated for v1.12 goes up to 18.06; earlier Kubernetes releases validated 17.03.
```bash
apt-get update
apt-get install \
  apt-transport-https \
  ca-certificates \
  curl \
  software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
apt-key fingerprint 0EBFCD88
add-apt-repository \
  "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) \
  stable"
apt-get update
apt-get install -y docker-ce=17.03.2~ce-0~ubuntu-xenial
```
After installing Docker, set the iptables FORWARD policy to ACCEPT:
```bash
# Docker sets the default policy to DROP
iptables -P FORWARD ACCEPT
```
```bash
apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo 'deb http://apt.kubernetes.io/ kubernetes-xenial main' >/etc/apt/sources.list.d/kubernetes.list
apt-get update
apt-get install -y kubeadm
# This automatically installs kubeadm, kubectl, kubelet, kubernetes-cni and socat
```
After installation, enable the kubelet service so that it starts on boot:
```bash
systemctl enable kubelet
```
kubelet must be enabled at boot; otherwise the cluster components will not come back up automatically after a system restart.
Next, run the cluster initialization on the three master nodes.
The difference between a single-master kubeadm setup and an HA setup is that, for HA, kubeadm is given a configuration file and runs init on each node according to that file.
```bash
root@k8s-master01:~/kubeadm-config# cat kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
kubernetesVersion: stable
networking:
  podSubnet: 192.168.0.0/16
apiServerCertSANs:
- k8s-master01
- k8s-master02
- k8s-master03
- 10.3.1.20
- 10.3.1.21
- 10.3.1.25
- 10.3.1.29
- 127.0.0.1
etcd:
  external:
    endpoints:
    - https://10.3.1.20:2379
    - https://10.3.1.21:2379
    - https://10.3.1.25:2379
    caFile: /etc/kubernetes/ssl/ca.pem
    certFile: /etc/etcd/ssl/etcd.pem
    keyFile: /etc/etcd/ssl/etcd-key.pem
    dataDir: /var/lib/etcd
token: 547df0.182e9215291ff27f
tokenTTL: "0"
root@k8s-master01:~/kubeadm-config#
```
Explanation of the configuration:
In v1.12 the API version has been raised to kubeadm.k8s.io/v1alpha3 and the kind is now ClusterConfiguration.
podSubnet: the custom Pod network CIDR.
apiServerCertSANs: list the hostnames, IPs and the VIP of every kube-apiserver node.
etcd: external means an external etcd cluster is used; list the etcd endpoints and certificate locations after it.
If the etcd cluster is instead managed by kubeadm, use local here together with the custom startup parameters.
token: may be left unspecified; generate one with `kubeadm token generate` (example below).
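For example, to generate a token to paste into kubeadm-config.yaml (the value shown is only a sample of the format; yours will differ):

```bash
root@k8s-master01:~/kubeadm-config# kubeadm token generate
547df0.182e9215291ff27f
```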
```bash
# Make sure swap is disabled
root@k8s-master01:~/kubeadm-config# kubeadm init --config kubeadm-config.yaml
```
The output looks like this:
```bash
# Initializing Kubernetes v1.12.0
[init] using Kubernetes version: v1.12.0
# Pre-flight checks before initialization
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
# Images can be pulled beforehand with 'kubeadm config images pull'
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
# Generate the kubelet service configuration
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
# Generate certificates
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [k8s-master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local k8s-master01 k8s-master02 k8s-master03] and IPs [10.96.0.1 10.3.1.20 10.3.1.20 10.3.1.21 10.3.1.25 10.3.1.29 127.0.0.1]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
# Generate kubeconfig files
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
# Generate static Pod manifests
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
# kubelet starts and reads the Pod manifests in /etc/kubernetes/manifests
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
# Pull images according to the manifests
[init] this might take a minute or longer if the control plane images have to be pulled
# All control-plane components are up
[apiclient] All control plane components are healthy after 27.014452 seconds
# Upload the configuration to the "kubeadm-config" ConfigMap in the "kube-system" namespace
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
# Label and taint the master node
[markmaster] Marking the node k8s-master01 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node k8s-master01 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-master01" as an annotation
# The bootstrap token in use
[bootstraptoken] using token: w79yp6.erls1tlc4olfikli
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
# Finally install the essential add-ons: CoreDNS and the kube-proxy DaemonSet
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

# Record the command below; it is used when other nodes join the cluster.
  kubeadm join 10.3.1.20:6443 --token w79yp6.erls1tlc4olfikli --discovery-token-ca-cert-hash sha256:7aac9eb45a5e7485af93030c3f413598d8053e1beb60fb3edf4b7e4fdb6a9db2
```
```bash
root@k8s-master01:~# mkdir -p $HOME/.kube
root@k8s-master01:~# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@k8s-master01:~# sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
There is now one node, and its status is "NotReady":
```bash
root@k8s-master01:~# kubectl get node
NAME           STATUS     ROLES    AGE     VERSION
k8s-master01   NotReady   master   3m50s   v1.12.0
root@k8s-master01:~#
```
Check the core components of the first master, which run as Pods:
```bash
root@k8s-master01:~# kubectl get pod -n kube-system -o wide
NAME                                    READY   STATUS    RESTARTS   AGE     IP          NODE           NOMINATED NODE
coredns-576cbf47c7-2dqsj                0/1     Pending   0          4m29s   <none>      <none>         <none>
coredns-576cbf47c7-7sqqz                0/1     Pending   0          4m29s   <none>      <none>         <none>
kube-apiserver-k8s-master01             1/1     Running   0          3m46s   10.3.1.20   k8s-master01   <none>
kube-controller-manager-k8s-master01   1/1     Running   0          3m40s   10.3.1.20   k8s-master01   <none>
kube-proxy-dpvkk                        1/1     Running   0          4m30s   10.3.1.20   k8s-master01   <none>
kube-scheduler-k8s-master01             1/1     Running   0          3m37s   10.3.1.20   k8s-master01   <none>
root@k8s-master01:~#
# coredns is Pending because of the taints set on the master.
```
Copy the generated pki directory to each of the other master nodes:
```bash
root@k8s-master01:~# scp -r /etc/kubernetes/pki root@10.3.1.21:/etc/kubernetes/
root@k8s-master01:~# scp -r /etc/kubernetes/pki root@10.3.1.25:/etc/kubernetes/
```
Copy the kubeadm configuration file over as well:
```bash
root@k8s-master01:~/# scp kubeadm-config.yaml root@10.3.1.21:~/
root@k8s-master01:~/# scp kubeadm-config.yaml root@10.3.1.25:~/
```
The first master is now deployed. The second and third masters, and any further masters, are initialized with the same kubeadm-config.yaml.
Run kubeadm init on the second master:
```bash
root@k8s-master02:~# kubeadm init --config kubeadm-config.yaml
[init] using Kubernetes version: v1.12.0
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
```
And likewise on the third master:

```bash
root@k8s-master03:~# kubeadm init --config kubeadm-config.yaml
[init] using Kubernetes version: v1.12.0
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
```
Finally, check the nodes:
```bash
root@k8s-master01:~# kubectl get node
NAME           STATUS     ROLES    AGE     VERSION
k8s-master01   NotReady   master   31m     v1.12.0
k8s-master02   NotReady   master   15m     v1.12.0
k8s-master03   NotReady   master   6m52s   v1.12.0
root@k8s-master01:~#
```
Check the running status of all components:
```bash
# The core components are running
root@k8s-master01:~# kubectl get pod -n kube-system -o wide
NAME                                    READY   STATUS              RESTARTS   AGE     IP          NODE           NOMINATED NODE
coredns-576cbf47c7-2dqsj                0/1     ContainerCreating   0          31m     <none>      k8s-master02   <none>
coredns-576cbf47c7-7sqqz                0/1     ContainerCreating   0          31m     <none>      k8s-master02   <none>
kube-apiserver-k8s-master01             1/1     Running             0          30m     10.3.1.20   k8s-master01   <none>
kube-apiserver-k8s-master02             1/1     Running             0          15m     10.3.1.21   k8s-master02   <none>
kube-apiserver-k8s-master03             1/1     Running             0          6m24s   10.3.1.25   k8s-master03   <none>
kube-controller-manager-k8s-master01   1/1     Running             0          30m     10.3.1.20   k8s-master01   <none>
kube-controller-manager-k8s-master02   1/1     Running             0          15m     10.3.1.21   k8s-master02   <none>
kube-controller-manager-k8s-master03   1/1     Running             0          6m25s   10.3.1.25   k8s-master03   <none>
kube-proxy-6tfdg                        1/1     Running             0          16m     10.3.1.21   k8s-master02   <none>
kube-proxy-dpvkk                        1/1     Running             0          31m     10.3.1.20   k8s-master01   <none>
kube-proxy-msqgn                        1/1     Running             0          7m44s   10.3.1.25   k8s-master03   <none>
kube-scheduler-k8s-master01             1/1     Running             0          30m     10.3.1.20   k8s-master01   <none>
kube-scheduler-k8s-master02             1/1     Running             0          15m     10.3.1.21   k8s-master02   <none>
kube-scheduler-k8s-master03             1/1     Running             0          6m26s   10.3.1.25   k8s-master03   <none>
```
Remove the taint from all masters so that Pods can also be scheduled onto them:
```bash
root@k8s-master01:~# kubectl taint nodes --all node-role.kubernetes.io/master-
node/k8s-master01 untainted
node/k8s-master02 untainted
node/k8s-master03 untainted
```
All nodes are in the "NotReady" state because a CNI plugin has not been installed yet.
Install the Calico network plugin:
```bash
root@k8s-master01:~# kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml
configmap/calico-config created
daemonset.extensions/calico-etcd created
service/calico-etcd created
daemonset.extensions/calico-node created
deployment.extensions/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-cni-plugin created
clusterrole.rbac.authorization.k8s.io/calico-cni-plugin created
serviceaccount/calico-cni-plugin created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
```
Check the node status again:
```bash
root@k8s-master01:~# kubectl get node
NAME           STATUS   ROLES    AGE   VERSION
k8s-master01   Ready    master   39m   v1.12.0
k8s-master02   Ready    master   24m   v1.12.0
k8s-master03   Ready    master   15m   v1.12.0
```
All components on every master are now healthy:
```bash
root@k8s-master01:~# kubectl get pod -n kube-system -o wide
NAME                                       READY   STATUS    RESTARTS   AGE    IP               NODE           NOMINATED NODE
calico-etcd-dcbtp                          1/1     Running   0          102s   10.3.1.25        k8s-master03   <none>
calico-etcd-hmd2h                          1/1     Running   0          101s   10.3.1.20        k8s-master01   <none>
calico-etcd-pnksz                          1/1     Running   0          99s    10.3.1.21        k8s-master02   <none>
calico-kube-controllers-75fb4f8996-dxvml   1/1     Running   0          117s   10.3.1.25        k8s-master03   <none>
calico-node-6kvg5                          2/2     Running   1          117s   10.3.1.21        k8s-master02   <none>
calico-node-82wjt                          2/2     Running   1          117s   10.3.1.25        k8s-master03   <none>
calico-node-zrtj4                          2/2     Running   1          117s   10.3.1.20        k8s-master01   <none>
coredns-576cbf47c7-2dqsj                   1/1     Running   0          38m    192.168.85.194   k8s-master02   <none>
coredns-576cbf47c7-7sqqz                   1/1     Running   0          38m    192.168.85.193   k8s-master02   <none>
kube-apiserver-k8s-master01                1/1     Running   0          37m    10.3.1.20        k8s-master01   <none>
kube-apiserver-k8s-master02                1/1     Running   0          22m    10.3.1.21        k8s-master02   <none>
kube-apiserver-k8s-master03                1/1     Running   0          12m    10.3.1.25        k8s-master03   <none>
kube-controller-manager-k8s-master01      1/1     Running   0          37m    10.3.1.20        k8s-master01   <none>
kube-controller-manager-k8s-master02      1/1     Running   0          21m    10.3.1.21        k8s-master02   <none>
kube-controller-manager-k8s-master03      1/1     Running   0          12m    10.3.1.25        k8s-master03   <none>
kube-proxy-6tfdg                           1/1     Running   0          23m    10.3.1.21        k8s-master02   <none>
kube-proxy-dpvkk                           1/1     Running   0          38m    10.3.1.20        k8s-master01   <none>
kube-proxy-msqgn                           1/1     Running   0          14m    10.3.1.25        k8s-master03   <none>
kube-scheduler-k8s-master01                1/1     Running   0          37m    10.3.1.20        k8s-master01   <none>
kube-scheduler-k8s-master02                1/1     Running   0          22m    10.3.1.21        k8s-master02   <none>
kube-scheduler-k8s-master03                1/1     Running   0          12m    10.3.1.25        k8s-master03   <none>
root@k8s-master01:~#
```
On every worker node, use kubeadm join to join the Kubernetes cluster; here the apiserver address of k8s-master01 is used uniformly for joining.
Join k8s-node01 to the cluster:
```bash
root@k8s-node01:~# kubeadm join 10.3.1.20:6443 --token w79yp6.erls1tlc4olfikli --discovery-token-ca-cert-hash sha256:7aac9eb45a5e7485af93030c3f413598d8053e1beb60fb3edf4b7e4fdb6a9db2
```
The output looks like this:
```bash
[preflight] running pre-flight checks
        [WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{} ip_vs:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
 2. Provide the missing builtin kernel ipvs support
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[discovery] Trying to connect to API Server "10.3.1.20:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.3.1.20:6443"
[discovery] Requesting info from "https://10.3.1.20:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.3.1.20:6443"
[discovery] Successfully established connection with API Server "10.3.1.20:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-node01" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
```
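The IPVS warning in the output above can be cleared by loading the listed kernel modules before joining. A sketch; persisting the modules via /etc/modules is an assumption about how you want to handle reboots:

```bash
# Load the IPVS-related modules named in the kubeadm warning
for mod in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do
  modprobe "$mod"
  echo "$mod" >> /etc/modules   # load them again on future boots
done
```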
Check the components running on the new node:
```bash
root@k8s-master01:~# kubectl get pod -n kube-system -o wide |grep node01
calico-node-hsg4w   2/2   Running   2   47m   10.3.1.63   k8s-node01   <none>
kube-proxy-xn795    1/1   Running   0   47m   10.3.1.63   k8s-node01   <none>
```
Check the node status now:
```bash
# There are now four nodes, all Ready
root@k8s-master01:~# kubectl get node
NAME           STATUS   ROLES    AGE    VERSION
k8s-master01   Ready    master   132m   v1.12.0
k8s-master02   Ready    master   117m   v1.12.0
k8s-master03   Ready    master   108m   v1.12.0
k8s-node01     Ready    <none>   52m    v1.12.0
```
Deploy keepalived on the three master nodes: apiserver plus keepalived float a VIP, and clients such as kubectl, kubelet and kube-proxy use the VIP when connecting to the apiserver. A load balancer is not used for now.
```bash
apt-get install keepalived
```
```bash
# MASTER node
cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id KEP
}

vrrp_script chk_k8s {
    script "killall -0 kube-apiserver"
    interval 1
    weight -5
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.3.1.29
    }
    track_script {
        chk_k8s
    }
    notify_master "/data/service/keepalived/notify.sh master"
    notify_backup "/data/service/keepalived/notify.sh backup"
    notify_fault "/data/service/keepalived/notify.sh fault"
}
```
Copy this configuration file to the remaining masters, lower the priority and set the state to BACKUP. The result is a floating VIP of 10.3.1.29, which was already included in the certificates created earlier; a sketch of the changes follows below.
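A sketch of what changes on the BACKUP masters (k8s-master02/03), assuming the same configuration path; only state and priority differ, and priority 90 is just an arbitrary lower value:

```bash
# On k8s-master01: copy the keepalived config over
scp /etc/keepalived/keepalived.conf 10.3.1.21:/etc/keepalived/

# On k8s-master02: demote to BACKUP with a lower priority, then start keepalived
sed -i -e 's/state MASTER/state BACKUP/' -e 's/priority 100/priority 90/' /etc/keepalived/keepalived.conf
systemctl enable keepalived && systemctl restart keepalived
```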
When kubeadm init was executed, the two node components kubelet and kube-proxy were configured to connect to the local kube-apiserver, so this step changes the kube-apiserver address in their configuration files to the VIP.
Modify the kubelet configuration on every node:
```bash
# Replace whatever apiserver address is currently in these files (it may differ per node) with the VIP
$ sed -i "s/10.3.1.63:6443/10.3.1.29:6443/g" /etc/kubernetes/bootstrap-kubelet.conf
$ sed -i "s/10.3.1.63:6443/10.3.1.29:6443/g" /etc/kubernetes/kubelet.conf
```
Restart kubelet:
```bash
$ systemctl restart docker kubelet
```
On the master nodes, modify the kube-proxy ConfigMap:
```bash
kubectl edit configmap kube-proxy -n kube-system
# change the server field in the embedded kubeconfig to the VIP:
#     server: https://10.3.1.29:6443
```
After the change, delete the kube-proxy Pods; they are recreated automatically with the new configuration:
```bash
$ kubectl delete pod -n kube-system kube-proxy-XXXXX
```
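To delete every kube-proxy Pod in one go instead of naming them individually, a label selector can be used (assuming the default k8s-app=kube-proxy label that kubeadm applies to the DaemonSet):

```bash
kubectl delete pod -n kube-system -l k8s-app=kube-proxy
```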
Create an nginx Deployment:
```bash
root@k8s-master01:~# kubectl run nginx --image=nginx:1.10 --port=80 --replicas=1
deployment.apps/nginx created
```
Check that the nginx Pod was created:
```bash
root@k8s-master:~# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP              NODE         NOMINATED NODE
nginx-787b58fd95-p9jwl   1/1     Running   0          70s   192.168.45.23   k8s-node02   <none>
```
Create a NodePort Service for nginx:
```bash
$ kubectl expose deployment nginx --type=NodePort --port=80
service "nginx" exposed
```
Check the nginx Service:
```bash
$ kubectl get svc -l=run=nginx -o wide
NAME    TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE   SELECTOR
nginx   NodePort   10.101.144.192   <none>        80:30847/TCP   10m   run=nginx
```
Verify that the nginx NodePort Service serves traffic correctly:
```bash
$ curl 10.3.1.21:30847
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
.........
```
This shows that the HA cluster is working. kubeadm's HA support is still at the v1alpha stage, so use it in production with caution; for a more detailed walkthrough, refer to the official documentation.