Deploying a Calico-Based Kubernetes Cluster with Kubeadm

1 Environment

      Host Name        Role         IP
      k8s-1001         master       172.31.135.239
      k8s-1002         node         172.31.135.238
      k8s-1003         node         172.31.135.237

2 Kernel Tuning

cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
EOF

modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
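Note that modprobe only loads the module for the current boot. A minimal sketch to make it persist across reboots, assuming systemd's modules-load.d mechanism on CentOS 7:

# load br_netfilter automatically at every boot
cat > /etc/modules-load.d/br_netfilter.conf <<EOF
br_netfilter
EOF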

3 Raising File Descriptor Limits

echo "* soft nofile 65536" >> /etc/security/limits.conf
echo "* hard nofile 65536" >> /etc/security/limits.conf
echo "* soft nproc 65536"  >> /etc/security/limits.conf
echo "* hard nproc 65536"  >> /etc/security/limits.conf
echo "* soft  memlock  unlimited"  >> /etc/security/limits.conf
echo "* hard memlock  unlimited"  >> /etc/security/limits.conf


Hard limits were introduced in AIX 4.1. A hard limit should be set by the system administrator; only members of the security group can raise it. A user may lower the limit for themselves, but the change is lost once that user logs out.

A soft limit is the ceiling the kernel actually enforces on a process's use of system resources. It can be changed by anyone, but it cannot exceed the hard limit. Note that only members of the security group can make a change permanent; an ordinary user's change is lost after logout. (This description originates from AIX documentation, but the soft/hard semantics are essentially the same on Linux.)

1) soft nofile and hard nofile: the soft and hard limits on open files for a single user. For example, a soft limit of 1000 and a hard limit of 1200 means the user can open at most 1000 files, no matter how many shells they start.

2) soft nproc and hard nproc: the soft and hard limits on the number of processes a single user may run.

3) memlock: the maximum amount of physical memory a task may lock (set to unlimited here).
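To confirm the new limits, log out, log back in, and check with the shell's ulimit builtin:

ulimit -n    # max open files, expect 65536
ulimit -u    # max user processes, expect 65536
ulimit -l    # max locked memory, expect unlimited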

4 Configuring the Kubernetes Yum Repo

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
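A quick sanity check that the repo is reachable (the exact version list will vary with the mirror's contents):

yum makecache fast
yum list kubelet --showduplicates | tail -5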

5 Configuring the Docker Yum Repo

cd /etc/yum.repos.d
wget https://download.docker.com/linux/centos/docker-ce.repo

6 Time Synchronization

Time synchronization within a cluster is essential:

systemctl enable ntpdate.service
echo '*/30 * * * * /usr/sbin/ntpdate time7.aliyun.com >/dev/null 2>&1' > /tmp/crontab2.tmp
crontab /tmp/crontab2.tmp
systemctl start ntpdate.service

ntpdate -u ntp.api.bz
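To check the current offset without stepping the clock, ntpdate's query-only flag can be used:

ntpdate -q time7.aliyun.com    # -q: query only, do not set the clock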

7 Disabling SELinux and the Firewall

systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

8 Disabling Swap

swapoff -a
yes | cp /etc/fstab /etc/fstab_bak
cat /etc/fstab_bak |grep -v swap > /etc/fstab
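Verify that no swap remains active:

swapon -s    # prints nothing when swap is fully off
free -m      # the Swap line should read all zeros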

9 Configuring /etc/hosts Resolution

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.31.135.239  k8s-1001
172.31.135.238  k8s-1002
172.31.135.237  k8s-1003

10 Passwordless SSH Between Nodes

ssh-keygen
ssh-copy-id -i ~/.ssh/id_rsa.pub username@192.168.x.xxx
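A hedged sketch for pushing the key to every node in one loop (hostnames from the table in section 1; assumes the root account and that the /etc/hosts entries from step 9 are in place):

for h in k8s-1001 k8s-1002 k8s-1003; do
    ssh-copy-id -i ~/.ssh/id_rsa.pub root@$h    # prompts for each node's password once
done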

11 Installing Dependencies

yum install -y epel-release
yum install -y yum-utils device-mapper-persistent-data lvm2 net-tools conntrack-tools wget vim ntpdate libseccomp libtool-ltdl lrzsz

12 Configuring IPVS Kernel Modules

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4




yum install ipset ipvsadm -y

13 Installing Docker

yum list docker-ce --showduplicates | sort -r 

yum install -y docker-ce-18.06.1.ce-3.el7

It is best not to install the latest version right away.

systemctl daemon-reload
systemctl enable docker
systemctl start docker

14 Installing kubelet, kubeadm, and kubectl on the Master and Nodes

yum install -y kubelet kubeadm kubectl    # run on both master and nodes
systemctl enable kubelet
Do not start kubelet yet.

15 If You Are in Mainland China, Follow the Steps Below

Generate the default configuration:

kubeadm config print init-defaults > /root/kubeadm.conf

Edit /root/kubeadm.conf to use Alibaba's domestic mirror, imageRepository: registry.aliyuncs.com/google_containers.
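A one-liner sketch of the same edit, assuming the top-level imageRepository key that kubeadm 1.14's default config emits:

sed -i 's#^imageRepository: .*#imageRepository: registry.aliyuncs.com/google_containers#' /root/kubeadm.conf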

Download the images:

kubeadm config images pull --config /root/kubeadm.conf
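To confirm the images landed locally:

docker images | grep registry.aliyuncs.com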

16 If the Network Allows, Initialize the Cluster Directly

kubeadm init --kubernetes-version=v1.14.1 --pod-network-cidr=10.244.0.0/16


We do not set --service-cidr here because we are going to deploy the Calico network below; Calico will set up the service network for us, and setting a service network at this point would cause the Calico deployment to fail.

Save this part of the output:

mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.31.135.239:6443 --token ljzfdh.5qccrqv482klk96h \
    --discovery-token-ca-cert-hash sha256:dc65895e08a9c0f531943940b44f6ef144dd3a7e5f76973758927a6e107281a1

17 Creating the kubectl Config Directory

mkdir -p $HOME/.kube
 cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

18 Getting the Token and Hash for Machines That Join Later

Obtain the token and hash value.
1) Get the token:
kubeadm token list
By default a token expires after 24 hours. If it has expired, generate a new one with:
kubeadm token create

2) Get the hash value:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
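Alternatively, kubeadm can print a ready-made join command (token plus hash) in one step:

kubeadm token create --print-join-command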

19 Verification

kubectl get pods --all-namespaces

NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system   coredns-fb8b8dccf-pdf9r            0/1     Pending   0          6m50s
kube-system   coredns-fb8b8dccf-rngcz            0/1     Pending   0          6m50s
kube-system   etcd-k8s-1001                      1/1     Running   0          5m52s
kube-system   kube-apiserver-k8s-1001            1/1     Running   0          6m4s
kube-system   kube-controller-manager-k8s-1001   1/1     Running   0          5m51s
kube-system   kube-proxy-b8dhg                   1/1     Running   0          6m50s
kube-system   kube-scheduler-k8s-1001            1/1     Running   0          5m47s

The coredns pods are Pending; don't worry about them for now, as this is caused by the missing network plugin.

20 Deploying the Calico Network Plugin

Official documentation:

https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/


kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml

We need to modify the Calico manifest:

https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
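Download the manifest locally so it can be edited (wget was installed in step 11):

wget https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml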

vim calico.yaml

1) Turn off IPIP mode and set typha_service_name:

- name: CALICO_IPV4POOL_IPIP
  value: "off"

typha_service_name: "calico-typha"




Calico defaults to IPIP mode: a tunl0 interface is created on every node, and this tunnel links the container networks of all nodes (the official docs recommend it when hosts sit in different IP subnets, e.g. AWS hosts in different zones).

We change it to BGP mode. Calico installs as a DaemonSet on every node; each host runs bird (a BGP client), which announces the IP range assigned to each node to the other hosts in the cluster, and traffic is forwarded directly through the host NIC (eth0 or ens33).

2) Adjust replicas:

  replicas: 1
  revisionHistoryLimit: 2

3) Set the pod network range, CALICO_IPV4POOL_CIDR:

- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"
4) If you want to pull the images manually, check the image versions noted in calico.yaml; otherwise they will be pulled automatically when you apply the manifest.
5) Deploy Calico:
kubectl apply -f calico.yaml

6) Check:
kubectl get po --all-namespaces
At this point some pods are Pending because the worker nodes do not yet have the required components.
7) Verify that BGP mode is active:
# ip route show
default via 172.31.143.253 dev eth0 
blackhole 10.244.0.0/24 proto bird 
10.244.0.2 dev caliac6de7553e8 scope link 
10.244.0.3 dev cali1591fcccf0f scope link 
10.244.1.0/24 via 172.31.135.237 dev eth0 proto bird 
10.244.2.0/24 via 172.31.135.238 dev eth0 proto bird 
169.254.0.0/16 dev eth0 scope link metric 1002 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 
172.31.128.0/20 dev eth0 proto kernel scope link src 172.31.135.239

21 Joining the Worker Nodes to the Cluster

kubeadm join 172.31.135.239:6443 --token ljzfdh.5qccrqv482klk96h \
    --discovery-token-ca-cert-hash sha256:dc65895e08a9c0f531943940b44f6ef144dd3a7e5f76973758927a6e107281a1 


Check the cluster on the master:
[root@k8s-1001 ~]# kubectl get node
NAME       STATUS   ROLES    AGE    VERSION
k8s-1001   Ready    master   37m    v1.14.1
k8s-1002   Ready    <none>   99s    v1.14.1
k8s-1003   Ready    <none>   115s   v1.14.1
[root@k8s-1001 ~]# kubectl get node -o wide
NAME       STATUS   ROLES    AGE    VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME
k8s-1001   Ready    master   37m    v1.14.1   172.31.135.239   <none>        CentOS Linux 7 (Core)   3.10.0-862.14.4.el7.x86_64   docker://18.6.1
k8s-1002   Ready    <none>   103s   v1.14.1   172.31.135.238   <none>        CentOS Linux 7 (Core)   3.10.0-862.14.4.el7.x86_64   docker://18.6.1
k8s-1003   Ready    <none>   119s   v1.14.1   172.31.135.237   <none>        CentOS Linux 7 (Core)   3.10.0-862.14.4.el7.x86_64   docker://18.6.1

[root@k8s-1001 ~]# kubectl get pod -o wide -n kube-system
NAME                               READY   STATUS    RESTARTS   AGE     IP               NODE       NOMINATED NODE   READINESS GATES
calico-node-8z92v                  2/2     Running   0          2m1s    172.31.135.238   k8s-1002   <none>           <none>
calico-node-k542k                  2/2     Running   0          7m32s   172.31.135.239   k8s-1001   <none>           <none>
calico-node-n4jgf                  2/2     Running   0          2m17s   172.31.135.237   k8s-1003   <none>           <none>
calico-typha-55968bfd7b-c5r4z      1/1     Running   0          7m33s   172.31.135.237   k8s-1003   <none>           <none>
coredns-fb8b8dccf-pdf9r            1/1     Running   0          37m     10.244.0.3       k8s-1001   <none>           <none>
coredns-fb8b8dccf-rngcz            1/1     Running   0          37m     10.244.0.2       k8s-1001   <none>           <none>
etcd-k8s-1001                      1/1     Running   0          36m     172.31.135.239   k8s-1001   <none>           <none>
kube-apiserver-k8s-1001            1/1     Running   0          36m     172.31.135.239   k8s-1001   <none>           <none>
kube-controller-manager-k8s-1001   1/1     Running   0          36m     172.31.135.239   k8s-1001   <none>           <none>
kube-proxy-b8dhg                   1/1     Running   0          37m     172.31.135.239   k8s-1001   <none>           <none>
kube-proxy-nvlmz                   1/1     Running   0          2m17s   172.31.135.237   k8s-1003   <none>           <none>
kube-proxy-rfb77                   1/1     Running   0          2m1s    172.31.135.238   k8s-1002   <none>           <none>
kube-scheduler-k8s-1001            1/1     Running   0          36m     172.31.135.239   k8s-1001   <none>           <none>

22 Enabling IPVS Mode for kube-proxy

Edit config.conf in the kube-system/kube-proxy ConfigMap and set `mode: "ipvs"`:

kubectl edit cm kube-proxy -n kube-system

Then delete the existing kube-proxy pods so they restart with the new mode, and verify with ipvsadm:

kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'

# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.96.0.1:443 rr
  -> 172.31.135.239:6443          Masq    1      0          0         
TCP  10.96.0.10:53 rr
  -> 10.244.0.2:53                Masq    1      0          0         
  -> 10.244.0.3:53                Masq    1      0          0         
TCP  10.96.0.10:9153 rr
  -> 10.244.0.2:9153              Masq    1      0          0         
  -> 10.244.0.3:9153              Masq    1      0          0         
TCP  10.111.3.127:5473 rr
  -> 172.31.135.237:5473          Masq    1      0          0         
UDP  10.96.0.10:53 rr
  -> 10.244.0.2:53                Masq    1      0          0         
  -> 10.244.0.3:53                Masq    1      0          0

23 The Master Is Tainted by Default, So Pods Will Not Be Scheduled on It

1) Allow the master to be schedulable:
kubectl taint node k8s-1001 node-role.kubernetes.io/master-
2) To restore the master-only state:
kubectl taint node k8s-1001 node-role.kubernetes.io/master="":NoSchedule
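To confirm the node's current taint state:

kubectl describe node k8s-1001 | grep -i taint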

24 Creating a Test Pod to Verify the Cluster

kubectl run net-test --image=alpine --replicas=2 sleep 360

# kubectl get pod -o wide 
NAME                       READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
net-test-7d6d58cc8-78wd6   1/1     Running   0          6s    10.244.1.2   k8s-1003   <none>           <none>
net-test-7d6d58cc8-hjdhw   1/1     Running   0          6s    10.244.2.2   k8s-1002   <none>           <none>
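As a final check that the Calico pod network routes across nodes, exec into one test pod and ping the other (pod name and IP taken from the output above; the alpine image ships with ping):

kubectl exec -it net-test-7d6d58cc8-78wd6 -- ping -c 3 10.244.2.2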