From One to Two: Installing a Kubernetes v1.14.3 Cluster Online with kubeadm

Preface

I first got into Kubernetes in early 2018, when I studied how to install a cluster with kubeadm and wrote an article on installing a Kubernetes cluster offline (link). My skills were limited at the time: looking back, that article has quite a few holes and the procedure was overly cumbersome. So here is a rewrite, using the much more convenient online installation method to stand up a Kubernetes cluster quickly. The online install would not work without Alibaba's mirrors, so thanks go to Alibaba. I hope this article helps anyone who needs it.

Planning

Node planning

This installation uses three Ubuntu Server 16.04 virtual machines.

IP       hostname  purpose
10.0.3.4 k8s-001 control plane
10.0.3.5 k8s-002 worker
10.0.3.6 k8s-003 worker
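
If these hostnames are not resolvable by DNS in your environment (an assumption about your setup), one common option is to add them to /etc/hosts on every node; a small sketch:

```shell
# Write the planned node entries to a snippet file; append it to
# /etc/hosts on each machine if your DNS does not resolve these names.
cat <<'EOF' > k8s-hosts.snippet
10.0.3.4 k8s-001
10.0.3.5 k8s-002
10.0.3.6 k8s-003
EOF
# Then, on every node: cat k8s-hosts.snippet | sudo tee -a /etc/hosts
```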

Kubernetes version

We use the fairly recent version 1.14.3.

Installation

Step 0: Install Docker

  1. The standard installation method is described in the official Docker documentation

  2. Readers whose network is behind the GFW can install from the Alibaba Cloud mirror instead; see Alibaba's official documentation

Step 1: Disable swap

Run the following commands on every node.

Turn off swap:

swapoff -a

Comment out the swap entry in /etc/fstab so that swap does not come back on after a reboot:

sed -i 's/^\([^#].*swap.*$\)/#\1/' /etc/fstab
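
To see what the sed one-liner does, here is a quick dry run against a sample fstab line (the sample device name is made up):

```shell
# An uncommented line containing "swap" gains a leading '#';
# already-commented lines are left alone.
printf '/dev/sda2 none swap sw 0 0\n' | sed 's/^\([^#].*swap.*$\)/#\1/'
# -> #/dev/sda2 none swap sw 0 0

# After editing the real /etc/fstab, 'swapon --show' printing nothing
# confirms swap is off.
```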

Step 2: Install kubeadm, kubectl, and kubelet

  1. Follow Google's official documentation to install

  2. 99% of readers will hit the GFW and fail the step above, so turn to Alibaba for help as described here

  • Go to Alibaba's OPSX mirror site, press Ctrl+F and search for kubernetes, then click "Help" at the end of the entry to see the setup instructions, reproduced below:

    apt-get update && apt-get install -y apt-transport-https
    curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add - 
    cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
    deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
    EOF  
    apt-get update
    # apt-get install -y kubelet kubeadm kubectl   # do not run this line as-is; see below
  • Note: do not run that last command directly. It would install the latest version by default, while we want to pin version 1.14.3, so run the following instead:

    apt-get install -y kubelet=1.14.3-00 kubeadm=1.14.3-00 kubectl=1.14.3-00

    If you are not sure which versions the mirror carries, query them with the command below; the second column is the version:

    root@k8s-001:/home# apt-cache madison kubeadm
    kubeadm |  1.15.0-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
    kubeadm |  1.14.3-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
    kubeadm |  1.14.2-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
    .
    .
    .
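
The package versions above are plain Debian version strings, so standard tools order them the way you would expect; and once the pinned versions are installed, you may optionally hold them so a routine apt-get upgrade cannot move the cluster to a newer release:

```shell
# Debian version strings such as 1.14.3-00 sort naturally with sort -V:
printf '1.15.0-00\n1.14.2-00\n1.14.3-00\n' | sort -V
# -> 1.14.2-00
#    1.14.3-00
#    1.15.0-00

# Optional: freeze the k8s packages at the installed version.
# apt-mark hold kubelet kubeadm kubectl
```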

Step 3: Generate the kubeadm configuration file

Run the following command on k8s-001:

cat <<EOF > ./kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.14.3
apiServerCertSANs:
- k8s-001
- 10.0.3.4
- myk8s.cluster
controlPlaneEndpoint: "10.0.3.4:6443"
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
networking:
  podSubnet: "10.244.0.0/16"
EOF

Notes on the configuration file:

  • kubernetesVersion: the cluster version, v1.14.3 for this install
  • apiServerCertSANs: kubeadm generates the cluster certificates during installation, and the k8s-001 and 10.0.3.4 SANs would be included automatically; it is still worth adding an extra domain name (myk8s.cluster) so the certificates stay valid even if the cluster IP changes
  • controlPlaneEndpoint: the control-plane endpoint. With a single control plane this is simply its own IP:6443; a highly available cluster would use a virtual IP or load-balancer address here instead, which a future article on HA setups can cover in detail
  • imageRepository: this setting matters. By default Kubernetes pulls images from gcr.io, which fails for 99% of readers because of the GFW, so we point it at Alibaba's registry instead
  • networking.podSubnet: the pod CIDR; any range that does not clash with your existing networks will do
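
Once the init in the next step completes, you can confirm the extra SANs actually landed in the API server certificate with openssl; a small helper sketch (the certificate path is kubeadm's default layout):

```shell
# Print the Subject Alternative Name section of a certificate.
inspect_sans() {
  openssl x509 -in "$1" -noout -text | grep -A1 'Subject Alternative Name'
}
# On the control plane, after 'kubeadm init':
#   inspect_sans /etc/kubernetes/pki/apiserver.crt
# Expect to see k8s-001, myk8s.cluster, and 10.0.3.4 among the entries.
```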

Step 4: Initialize the control-plane node

Run the following command on k8s-001:

kubeadm init --config kubeadm-config.yaml

About the command:

  • --config: the configuration file, i.e. the kubeadm-config.yaml generated in step 3

Barring surprises, you will see the success message:

.
.
.
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities 
and service account keys on each node and then running the following as root:

  kubeadm join 10.0.3.4:6443 --token xxxxxx.xxxxxxxxxxxxxxxx \
    --discovery-token-ca-cert-hash sha256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx  \
    --experimental-control-plane      

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.3.4:6443 --token xxxxxx.xxxxxxxxxxxxxxxx \
    --discovery-token-ca-cert-hash sha256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

Following the printed instructions, run these three commands on k8s-001:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

We use the kubectl client to talk to the cluster, and that connection must be authenticated; the three commands above simply place the credentials file in the default location, $HOME/.kube/config, where kubectl looks for it.

Now kubectl works and we can inspect the cluster:

root@k8s-001:/home# kubectl get pod --all-namespaces
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   coredns-d5947d4b-k69nd            0/1     Pending   0          4m34s
kube-system   coredns-d5947d4b-ll6hx            0/1     Pending   0          4m34s
kube-system   etcd-k8s-001                      1/1     Running   0          3m44s
kube-system   kube-apiserver-k8s-001            1/1     Running   0          3m47s
kube-system   kube-controller-manager-k8s-001   1/1     Running   0          4m1s
kube-system   kube-proxy-p9jgp                  1/1     Running   0          4m34s
kube-system   kube-scheduler-k8s-001            1/1     Running   0          3m48s

The output shows the coredns pods stuck in Pending: coredns depends on a network plugin, and no network plugin is installed yet.

Step 5: Join the worker nodes to the cluster

Following the hint printed in step 4, run the following on k8s-002 and k8s-003:

kubeadm join 10.0.3.4:6443 --token xxxxxx.xxxxxxxxxxxxxxxx --discovery-token-ca-cert-hash sha256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

Note that the token embedded in the join command has a limited lifetime (24 hours by default); once it expires, joining with it fails and you need to generate a fresh join command, like so:

root@k8s-001:~# kubeadm token create --print-join-command 
kubeadm join 10.0.3.4:6443 --token xxxxxx.xxxxxxxxxxxxxxxx --discovery-token-ca-cert-hash sha256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
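
If you still have a valid token but lost the hash, the --discovery-token-ca-cert-hash value can also be recomputed from the cluster CA certificate; this is the standard openssl recipe from the kubeadm documentation (the path assumes kubeadm's default layout):

```shell
# SHA-256 over the DER-encoded public key of the CA certificate,
# which is exactly the value kubeadm prints after "sha256:".
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
}
# On the control plane:
#   ca_cert_hash /etc/kubernetes/pki/ca.crt
```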

On success you will see:

.
.
.
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Back on k8s-001 we can now see the whole cluster:

root@k8s-001:/home# kubectl get node
NAME      STATUS     ROLES    AGE   VERSION
k8s-001   NotReady   master   13m   v1.14.3
k8s-002   NotReady   <none>   76s   v1.14.3
k8s-003   NotReady   <none>   56s   v1.14.3
root@k8s-001:/home/kubernetes/init# kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE    IP         NODE      NOMINATED NODE   READINESS GATES
kube-system   coredns-d5947d4b-k69nd            0/1     Pending   0          14m    <none>     <none>    <none>           <none>
kube-system   coredns-d5947d4b-ll6hx            0/1     Pending   0          14m    <none>     <none>    <none>           <none>
kube-system   etcd-k8s-001                      1/1     Running   0          13m    10.0.3.4   k8s-001   <none>           <none>
kube-system   kube-apiserver-k8s-001            1/1     Running   0          13m    10.0.3.4   k8s-001   <none>           <none>
kube-system   kube-controller-manager-k8s-001   1/1     Running   0          13m    10.0.3.4   k8s-001   <none>           <none>
kube-system   kube-proxy-g4p5x                  1/1     Running   0          2m     10.0.3.5   k8s-002   <none>           <none>
kube-system   kube-proxy-p9jgp                  1/1     Running   0          14m    10.0.3.4   k8s-001   <none>           <none>
kube-system   kube-proxy-z9cpd                  1/1     Running   0          100s   10.0.3.6   k8s-003   <none>           <none>
kube-system   kube-scheduler-k8s-001            1/1     Running   0          13m    10.0.3.4   k8s-001   <none>           <none>

Step 6: Install the Calico network plugin

There are many network plugins to choose from (flannel, for example); here we use Calico:

root@k8s-001:/home# kubectl apply -f https://docs.projectcalico.org/v3.4/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
configmap/calico-config created
service/calico-typha created
deployment.apps/calico-typha created
poddisruptionbudget.policy/calico-typha created
daemonset.extensions/calico-node created
serviceaccount/calico-node created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created

With the network plugin in place, check the pods again and the coredns pods are now Running:

root@k8s-001:/home# kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE     IP            NODE      NOMINATED NODE   READINESS GATES
kube-system   calico-node-gf6j4                 1/1     Running   0          39s     10.0.3.4      k8s-001   <none>           <none>
kube-system   calico-node-l4w9n                 1/1     Running   0          39s     10.0.3.5      k8s-002   <none>           <none>
kube-system   calico-node-rtcnl                 1/1     Running   0          39s     10.0.3.6      k8s-003   <none>           <none>
kube-system   coredns-d5947d4b-k69nd            1/1     Running   0          17m     10.244.0.10   k8s-001   <none>           <none>
kube-system   coredns-d5947d4b-ll6hx            1/1     Running   0          17m     10.244.1.6    k8s-002   <none>           <none>
kube-system   etcd-k8s-001                      1/1     Running   0          16m     10.0.3.4      k8s-001   <none>           <none>
kube-system   kube-apiserver-k8s-001            1/1     Running   0          16m     10.0.3.4      k8s-001   <none>           <none>
kube-system   kube-controller-manager-k8s-001   1/1     Running   0          16m     10.0.3.4      k8s-001   <none>           <none>
kube-system   kube-proxy-g4p5x                  1/1     Running   0          5m5s    10.0.3.5      k8s-002   <none>           <none>
kube-system   kube-proxy-p9jgp                  1/1     Running   0          17m     10.0.3.4      k8s-001   <none>           <none>
kube-system   kube-proxy-z9cpd                  1/1     Running   0          4m45s   10.0.3.6      k8s-003   <none>           <none>
kube-system   kube-scheduler-k8s-001            1/1     Running   0          16m     10.0.3.4      k8s-001   <none>           <none>

Step 7: Install the ingress controller

We will run the ingress controller as a single replica pinned to the k8s-003 node, using k8s-003's host network. (One replica is not a highly available setup; a later article can cover the HA approach.)

Download the ingress manifest; we need to modify it:

wget -O ingress.yaml https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml

Edit the downloaded ingress.yaml:

.
.
.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      nodeName: k8s-003    # added: schedule the pod only on k8s-003
      hostNetwork: true    # added: use the host network of k8s-003
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.25.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
.
.
.

The quay.io registry can also be blocked or painfully slow in some places, so use a mirror: replace image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.25.0 with image: quay.mirrors.ustc.edu.cn/kubernetes-ingress-controller/nginx-ingress-controller:0.24.1. (I dropped to version 0.24.1 because 0.25.0 kept failing its health probes in my testing.)
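
Rather than editing by hand, the substitution can be scripted; a sketch with sed (it assumes the ingress.yaml downloaded above sits in the current directory, and falls through quietly if not):

```shell
OLD='quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.25.0'
NEW='quay.mirrors.ustc.edu.cn/kubernetes-ingress-controller/nginx-ingress-controller:0.24.1'

# Rewrite the image reference in place if the manifest is present.
if [ -f ingress.yaml ]; then
  sed -i "s|$OLD|$NEW|" ingress.yaml
fi

# Dry-run demonstration of the same substitution:
printf 'image: %s\n' "$OLD" | sed "s|$OLD|$NEW|"
```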

Apply the modified manifest:

root@k8s-001:/home# kubectl apply -f ingress.yaml
namespace/ingress-nginx created
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
serviceaccount/nginx-ingress-serviceaccount created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
role.rbac.authorization.k8s.io/nginx-ingress-role created
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
deployment.apps/nginx-ingress-controller created

Wait a moment, then check the pod again:

root@k8s-001:/home/kubernetes/init# kubectl get pod -n ingress-nginx -o wide
NAME                                        READY   STATUS    RESTARTS   AGE     IP         NODE      NOMINATED NODE   READINESS GATES
nginx-ingress-controller-6558b88448-mv7cz   1/1     Running   0          2m57s   10.0.3.6   k8s-003   <none>           <none>

Step 8: Test it out

The cluster is now fully installed; let's verify it with a simple nginx application.

Create the application manifest nginx.yaml, which defines three resources: a Deployment, a Service, and an Ingress.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.0
        ports:
        - containerPort: 80  

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx

---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  rules:
  - host: your.local.domain 
    http:
      paths:
      - backend:
          serviceName: nginx
          servicePort: 80
        path: /

Deploy it:

root@k8s-001:/home# kubectl apply -f nginx.yaml
deployment.apps/nginx created
service/nginx created
ingress.extensions/nginx created

Check the pod status:

root@k8s-001:/home# kubectl get pod 
NAME                    READY   STATUS    RESTARTS   AGE
nginx-8cc98cb56-knszf   1/1     Running   0          12s

Test with curl, sending the Host header to 10.0.3.6 (where the ingress controller listens on the host network):

root@k8s-001:/home# curl -H "Host: your.local.domain" 10.0.3.6
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Done!
