Kubernetes + Docker Cluster Deployment

 

 

Environment preparation:

  master: 192.168.1.118    OS: CentOS 7.3

  node1:  192.168.1.155    OS: CentOS 7.3

  node2:  192.168.1.156    OS: CentOS 7.3

  

  (1) Keep the clocks of all nodes precisely synchronized with NTP; if the nodes can reach the Internet directly, simply start the chronyd system service and enable it to start at boot.

  (2) Resolve the nodes' hostnames through DNS; in a test environment with only a few hosts, the hosts file can be used instead.

  (3) Stop the iptables or firewalld service on every node and make sure it is disabled from starting at boot.

  (4) Disable SELinux on every node.

  (5) Disable all swap devices on every node.

  (6) To use the ipvs proxy mode, every node must also load the relevant ipvs kernel modules (a combined sketch of these steps follows this list).
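
#A minimal sketch of steps (1), (3), (4) and (6), run on every node; the ipvs module names below are the ones commonly loaded on CentOS 7 and are assumptions to adjust for your kernel:

 ~]# systemctl start chronyd && systemctl enable chronyd        #(1) time synchronization now and at boot
 ~]# systemctl stop firewalld && systemctl disable firewalld    #(3) firewall off now and at boot
 ~]# setenforce 0                                               #(4) SELinux permissive immediately
 ~]# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config     #(4) persist across reboots
 ~]# for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do modprobe $m; done    #(6) ipvs modules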

 

[Master + Node base configuration]

#Hostname resolution: add the following entries to /etc/hosts on every node (one way of applying them is sketched after the list)

  192.168.1.118    master

  192.168.1.155    node1

  192.168.1.156    node2
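
#One way to apply these entries on each node (a sketch; it assumes /etc/hosts contains no conflicting entries yet):

 ~]# cat >> /etc/hosts <<EOF
192.168.1.118    master
192.168.1.155    node1
192.168.1.156    node2
EOF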

#When deploying the cluster, kubeadm by default checks whether all swap devices on the host are disabled and aborts the deployment if they are not. If the host has enough memory, disable all swap devices; otherwise, you must pass the relevant option to kubeadm init and kubeadm join later to ignore this preflight error.

 

To disable swap, first turn off all swap devices that are currently active:

   ~]# swapoff -a

Then edit /etc/fstab and comment out every line that mounts a swap device (a non-interactive sketch follows).
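
#A non-interactive way to comment out the swap entry (a sketch; it assumes the fstab line contains the word "swap", so review /etc/fstab afterwards):

 ~]# sed -ri 's/.*swap.*/#&/' /etc/fstab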

 

#Download the Aliyun docker-ce repo

 ~]# cd /etc/yum.repos.d/

  wget  https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

 

#Install Docker CE (the Kubernetes packages are installed from their own repo further below)

~]# yum install docker-ce -y

 

#To fetch the Kubernetes component images from the default k8s.gcr.io image registry, configure the Environment variables in the Docker unit file (/usr/lib/systemd/system/docker.service) and define a usable HTTPS_PROXY there.

~]# vim /usr/lib/systemd/system/docker.service

  .......

   [Service]
  Type=notify
  # the default is not to use systemd for cgroups because the delegate issues still
  # exists and systemd currently does not support the cgroup feature set required
  # for containers run by docker
  Environment="HTTPS_PROXY=http://www.ik8s.io:10070"      
  Environment="NO_PROXY=127.0.0.0/8,192.168.1.0/24"         #本地IP訪問無需代理
  ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
  ExecStartPost=/usr/bin/iptables -P FORWARD ACCEPT           #since Docker 1.13 the daemon sets the iptables FORWARD chain's default policy to DROP, which breaks the packet forwarding Kubernetes relies on; add this line after "ExecStart=/usr/bin/dockerd" so the FORWARD policy is reset to ACCEPT once docker starts
  ExecReload=/bin/kill -s HUP $MAINPID
  TimeoutSec=0
  RestartSec=2
  Restart=always

   .......

 

 ~]# systemctl daemon-reload          #reload the systemd unit files
 ~]# systemctl start docker               #start docker
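
#Optional sanity check that the daemon picked up the proxy variables (a sketch):

 ~]# systemctl show docker --property=Environment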

#The following two bridge-related kernel parameters must both be set to 1:

 ~]# sysctl  -a | grep bridge 

 ~]# vim /etc/sysctl.d/k8s.conf

  net.bridge.bridge-nf-call-ip6tables = 1

  net.bridge.bridge-nf-call-iptables = 1

 ~]# sysctl -p /etc/sysctl.d/k8s.conf         #make the system re-read the settings
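
#If sysctl reports that these keys do not exist, the br_netfilter module is probably not loaded yet; loading it first should make them available (a sketch under that assumption):

 ~]# modprobe br_netfilter
 ~]# sysctl -p /etc/sysctl.d/k8s.conf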

 

#Create the Aliyun Kubernetes repo

 ~]# cd /etc/yum.repos.d/

yum.repos.d]# vim kubernetes.repo

  [kubernetes]
  name=Kubernetes Repository
  baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
  gpgcheck=1
  gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg

 ~]# yum install kubelet kubectl kubeadm -y

 ~]# systemctl enable docker kubelet     #enable docker and kubelet to start at boot

 

#Start downloading the images for the Kubernetes components; if k8s.gcr.io cannot be reached directly, pull the images from a domestic mirror and re-tag them (see the sketch after the pull output below).

 ~]# rpm -qa|grep kubeadm      #check the installed kubeadm version before pulling; the image versions must match it or the deployment will fail
kubeadm-1.13.4-0.x86_64

~]# kubeadm config images pull
[config/images] Pulled k8s.gcr.io/kube-apiserver:v1.13.4
[config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.13.4
[config/images] Pulled k8s.gcr.io/kube-scheduler:v1.13.4
[config/images] Pulled k8s.gcr.io/kube-proxy:v1.13.4
[config/images] Pulled k8s.gcr.io/pause:3.1
[config/images] Pulled k8s.gcr.io/etcd:3.2.24
[config/images] Pulled k8s.gcr.io/coredns:1.2.6
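
#If k8s.gcr.io is unreachable, a sketch of pulling the same images from a domestic mirror and re-tagging them; the mirror path registry.cn-hangzhou.aliyuncs.com/google_containers is an assumption, and the tags must match the installed kubeadm version (v1.13.4 here):

 ~]# for i in kube-apiserver:v1.13.4 kube-controller-manager:v1.13.4 kube-scheduler:v1.13.4 kube-proxy:v1.13.4 pause:3.1 etcd:3.2.24 coredns:1.2.6; do
       docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$i
       docker tag  registry.cn-hangzhou.aliyuncs.com/google_containers/$i k8s.gcr.io/$i
       docker rmi  registry.cn-hangzhou.aliyuncs.com/google_containers/$i
     done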


 #Confirm that the image files have been downloaded
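
#For example (a sketch):

 ~]# docker images | grep 'k8s.gcr.io'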

 

 

 

 

[Master configuration]

 #If swap devices have not been disabled, edit the kubelet configuration file /etc/sysconfig/kubelet so that the swap-enabled status error is ignored, with the following content:

~]# vim /etc/sysconfig/kubelet

  KUBELET_EXTRA_ARGS="--fail-swap-on=false"

#Initialize Kubernetes; 10.244.0.0/16 is flannel's default Pod network segment, so we leave it unchanged.

~]# kubeadm init --kubernetes-version="v1.13.4" --pod-network-cidr="10.244.0.0/16" --ignore-preflight-errors=Swap

    .......   

  Your Kubernetes master has initialized successfully!

  To start using your cluster, you need to run the following as a regular user:

   mkdir -p $HOME/.kube
   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
   sudo chown $(id -u):$(id -g) $HOME/.kube/config

  You should now deploy a pod network to the cluster.
  Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    https://kubernetes.io/docs/concepts/cluster-administration/addons/

  You can now join any number of machines by running the following on each node
  as root:

   kubeadm join 192.168.1.118:6443 --token gx1knl.wts9qo4ebghwk242 --discovery-token-ca-cert-hash sha256:bd7bb24b445dc95f0571c501bdc4e82aa23fdc8a7194a571790923b7d4b10468    #record this command; it is used later to join the nodes to the cluster

 

#Confirm that port 6443 is now listening
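
#For example, with ss (a sketch):

 ~]# ss -tnlp | grep 6443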

 

#To start using the cluster, run the following as a regular user (for convenience, root is used here):

~]# mkdir -p $HOME/.kube   

~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

 

#Install the flannel network add-on; on Kubernetes 1.7+ it can be deployed directly with the command below:

~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

clusterrole.rbac.authorization.k8s.io/flannel created

clusterrolebinding.rbac.authorization.k8s.io/flannel created

serviceaccount/flannel created

configmap/kube-flannel-cfg created

daemonset.extensions/kube-flannel-ds-amd64 created

daemonset.extensions/kube-flannel-ds-arm64 created

daemonset.extensions/kube-flannel-ds-arm created

daemonset.extensions/kube-flannel-ds-ppc64le created

daemonset.extensions/kube-flannel-ds-s390x created

 

#Once flannel is fully deployed, check the status of the kube-system components; all pods showing Running means everything is healthy.
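
#A sketch of the check:

 ~]# kubectl get pods -n kube-system -o wide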

 

 

 

[Node configuration]

 #Ignore the swap error

~]# vim /etc/sysconfig/kubelet

  KUBELET_EXTRA_ARGS="--fail-swap-on=false"

 

#Join the two Node machines to the Kubernetes cluster

~]# kubeadm join 192.168.1.118:6443 --token gx1knl.wts9qo4ebghwk242 --discovery-token-ca-cert-hash sha256:bd7bb24b445dc95f0571c501bdc4e82aa23fdc8a7194a571790923b7d4b10468 --ignore-preflight-errors=Swap

  [preflight] Running pre-flight checks
      [WARNING Swap]: running with swap on is not supported. Please disable swap
      [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.3. Latest validated version: 18.06
  [discovery] Trying to connect to API Server "192.168.1.118:6443"
  [discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.118:6443"
  [discovery] Requesting info from "https://192.168.1.118:6443" again to validate TLS against the pinned public key
  [discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.1.118:6443"
  [discovery] Successfully established connection with API Server "192.168.1.118:6443"
  [join] Reading configuration from the cluster...
  [join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
  [kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
  [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
  [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
  [kubelet-start] Activating the kubelet service
  [tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
  [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "n2" as an annotation

  This node has joined the cluster:            #this message means the node has successfully joined the cluster
  * Certificate signing request was sent to apiserver and a response was received.
  * The Kubelet was informed of the new secure connection details.

  Run 'kubectl get nodes' on the master to see this node join the cluster.

 

 ~]# mkdir -p $HOME/.kube            #run on each Node so the config copied from the Master below has somewhere to land

 

#After all Node machines have joined the cluster, the Master shows every node already in the Ready state.
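
#A sketch of the check, run on the Master:

 ~]# kubectl get nodes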

 

 

#Copy the authentication file from the Master to the Node machines; a Node needs these credentials to connect to the cluster with kubectl.

~]# scp /etc/kubernetes/admin.conf root@192.168.1.155:/root/.kube/config

~]# scp /etc/kubernetes/admin.conf root@192.168.1.156:/root/.kube/config
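
#A quick check that kubectl also works from a Node after the copy (a sketch, run on node1 or node2):

 ~]# kubectl get nodes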

 

Kubernetes cluster setup complete
