Initializing a Kubernetes Cluster with kubeadm

 

 

There are two ways to install a cluster:
  1. Manually install every component on every node, which is extremely complicated and error-prone.
  2. Use a tool: kubeadm.

 

kubeadm is the official management tool dedicated to deploying clusters.

  1. With kubeadm, every node needs Docker installed, and that includes the master node.
  2. Every node, again including the master, must have kubelet installed.
  3. The API Server, Scheduler, Controller-Manager, etcd, and so on run as containers on top of kubelet; in other words, even K8S's own components run directly in Pods (see the quick check below). The other nodes likewise run kube-proxy as a container in a Pod.
  4. The flannel network plugin also needs to run in a Pod.
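With a default kubeadm install, there is a quick way to confirm this layout once the master is initialized: the control-plane components are static Pods whose manifests live under /etc/kubernetes/manifests on the master (a minimal check, listing abbreviated):

[root@master ~]# ls /etc/kubernetes/manifests
etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml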

 

Kubernetes and Docker download locations and YUM repo configuration:

Aliyun mirror: https://mirrors.aliyun.com

The yum path for installing K8S is https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/

The docker repo file: wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

[root@master yum.repos.d]# cat kubernetes.repo
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
enabled=1
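A quick sanity check that the repo resolves before installing anything (the repo id matches the [kubernetes] section above; output abbreviated):

[root@master yum.repos.d]# yum repolist kubernetes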

Each worker node needs the same yum repo file installed.

Once the yum repos are configured, the components can be installed properly.

 

Install ipvsadm on every node:
yum -y install ipvsadm

Step 1: install the components.

[root@master yum.repos.d]# yum install docker-ce kubelet kubeadm kubectl

 

Step 2: initialize the master node.
[root@master yum.repos.d]# vim /usr/lib/systemd/system/docker.service
Environment="HTTPS_PROXY=http://www.ik8s.io:10080"
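After editing the unit file, systemd has to reload it before the proxy takes effect; a NO_PROXY entry is also a sensible companion so local traffic bypasses the proxy (the 192.168.163.0/24 subnet here is an assumption matching this lab, adjust to your own):

Environment="NO_PROXY=127.0.0.0/8,192.168.163.0/24"

[root@master yum.repos.d]# systemctl daemon-reload
[root@master yum.repos.d]# systemctl restart docker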

You can inspect Docker's configuration with docker info.

Make sure both of the following values are 1, not 0:

cat /proc/sys/net/bridge/bridge-nf-call-ip6tables
1

cat /proc/sys/net/bridge/bridge-nf-call-iptables
1

If either is 0, fix it as follows:

vim /etc/sysctl.conf 
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1

sysctl -p
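If the bridge-nf-call files are missing from /proc/sys/net/bridge altogether, the br_netfilter kernel module is most likely not loaded; a minimal fix before re-applying the settings:

modprobe br_netfilter
sysctl -p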

 

Verify the components were installed successfully:

[root@master yum.repos.d]# rpm -ql kubelet
/etc/kubernetes/manifests
/etc/sysconfig/kubelet
/usr/bin/kubelet
/usr/lib/systemd/system/kubelet.service

Swap must be disabled on every node. Early on, K8S flatly refused swap: with swap enabled it would neither install nor start. You can tell kubelet to tolerate swap with the parameter below.
[root@master yum.repos.d]# cat /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS=

Change it to the following:
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
KUBE_PROXY_MODE=ipvs

Prerequisites for enabling ipvs in kube-proxy:
Since ipvs is now part of the mainline kernel, enabling ipvs mode for kube-proxy requires loading the following kernel modules:
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4

Run the following script on all Kubernetes nodes, node1 and node2:

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

The script creates /etc/sysconfig/modules/ipvs.modules so the required modules are reloaded automatically after a node reboot. Use lsmod | grep -e ip_vs -e nf_conntrack_ipv4 to confirm the kernel modules loaded correctly.

Initialize the cluster:

[root@master yum.repos.d]# kubeadm init --kubernetes-version=v1.14.1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap --ignore-preflight-errors=NumCPU   // I'm practicing on a single-core VM here, so the CPU check is ignored; on a production server this isn't a concern.

This can fail because the images are hosted abroad and cannot be pulled directly. Pull all the images with docker first and then initialize, or switch to a domestic mirror site.

Image pull script:

[root@master ~]# cat dockerpull.sh
#!/bin/bash
K8S_VERSION=v1.14.1
ETCD_VERSION=3.3.10
#DASHBOARD_VERSION=v1.8.3
#FLANNEL_VERSION=v0.10.0-amd64
DNS_VERSION=1.3.1
PAUSE_VERSION=3.1
# Core components
docker pull mirrorgooglecontainers/kube-apiserver-amd64:$K8S_VERSION
docker pull mirrorgooglecontainers/kube-controller-manager-amd64:$K8S_VERSION
docker pull mirrorgooglecontainers/kube-scheduler-amd64:$K8S_VERSION
docker pull mirrorgooglecontainers/kube-proxy-amd64:$K8S_VERSION
docker pull mirrorgooglecontainers/etcd-amd64:$ETCD_VERSION
docker pull mirrorgooglecontainers/pause:$PAUSE_VERSION
docker pull coredns/coredns:$DNS_VERSION

# Network component
#docker pull quay.io/coreos/flannel:$FLANNEL_VERSION

# Retag to the names kubeadm expects
docker tag mirrorgooglecontainers/kube-apiserver-amd64:$K8S_VERSION k8s.gcr.io/kube-apiserver:$K8S_VERSION
docker tag mirrorgooglecontainers/kube-controller-manager-amd64:$K8S_VERSION k8s.gcr.io/kube-controller-manager:$K8S_VERSION
docker tag mirrorgooglecontainers/kube-scheduler-amd64:$K8S_VERSION k8s.gcr.io/kube-scheduler:$K8S_VERSION
docker tag mirrorgooglecontainers/kube-proxy-amd64:$K8S_VERSION k8s.gcr.io/kube-proxy:$K8S_VERSION
docker tag mirrorgooglecontainers/etcd-amd64:$ETCD_VERSION k8s.gcr.io/etcd:$ETCD_VERSION
docker tag mirrorgooglecontainers/pause:$PAUSE_VERSION k8s.gcr.io/pause:$PAUSE_VERSION
docker tag coredns/coredns:$DNS_VERSION k8s.gcr.io/coredns:$DNS_VERSION

# Remove the redundant images
docker rmi mirrorgooglecontainers/kube-apiserver-amd64:$K8S_VERSION
docker rmi mirrorgooglecontainers/kube-controller-manager-amd64:$K8S_VERSION
docker rmi mirrorgooglecontainers/kube-scheduler-amd64:$K8S_VERSION
docker rmi mirrorgooglecontainers/kube-proxy-amd64:$K8S_VERSION
docker rmi mirrorgooglecontainers/etcd-amd64:$ETCD_VERSION
docker rmi mirrorgooglecontainers/pause:$PAUSE_VERSION
docker rmi coredns/coredns:$DNS_VERSION
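A sketch of using the script (dockerpull.sh as shown above; the grep just confirms the retagged images landed under k8s.gcr.io):

[root@master ~]# bash dockerpull.sh
[root@master ~]# docker image ls | grep k8s.gcr.io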

Now run the init again:

[root@master yum.repos.d]# kubeadm init --kubernetes-version=v1.14.1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap --ignore-preflight-errors=NumCPU
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
  You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.163.100:6443 --token isl76u.fwmreocpbovdv5xh \
--discovery-token-ca-cert-hash sha256:252ef008db8fee83cc0f652424bb9bbf0d66934595de0cd170ebb68a1f2d8d84

 

At this point it tells you to run those commands as a regular user, fix the ownership and group of the kube config, and so on.
On the other nodes, run the kubeadm join command above as root to add them to the cluster; the token and certificate hash act as a key that keeps outsiders from joining the cluster.

The DNS add-on is now on its third generation: the first was SkyDNS, the second kube-dns, and starting with Kubernetes 1.11 it is CoreDNS.

The third generation supports dynamic configuration reloading and other features that the earlier versions lacked.

 

Since this is just practice, everything here is done directly as root.

[root@master ~]# ss -tnl
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 127.0.0.1:10248 *:*
LISTEN 0 128 127.0.0.1:10249 *:*
LISTEN 0 128 192.168.163.100:2379 *:*
LISTEN 0 128 127.0.0.1:2379 *:*
LISTEN 0 128 192.168.163.100:2380 *:*
LISTEN 0 128 127.0.0.1:10257 *:*
LISTEN 0 128 127.0.0.1:10259 *:*
LISTEN 0 128 *:22 *:*
LISTEN 0 128 127.0.0.1:35544 *:*
LISTEN 0 100 127.0.0.1:25 *:*
LISTEN 0 128 :::10250 :::*
LISTEN 0 128 :::10251 :::*
LISTEN 0 128 :::6443 :::*
LISTEN 0 128 :::10252 :::*
LISTEN 0 128 :::10256 :::*
LISTEN 0 128 :::22 :::*
LISTEN 0 100 ::1:25 :::*

This shows port 6443 is now listening, so the other nodes can join.

[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

At this point the kubectl command works.

 

[root@master ~]# kubectl get cs    (cs is short for componentstatus; checks component health)
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health":"true"}

[root@master ~]# kubectl get nodes    (list the cluster's nodes)
NAME STATUS ROLES AGE VERSION
master NotReady master 43h v1.14.1

NotReady means the node is not ready yet, because the network add-on is still missing.

flannel is available at: https://github.com/coreos/flannel

For Kubernetes v1.7+: on Kubernetes 1.7 or later, run the following command directly.

[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

podsecuritypolicy.extensions/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created

But running these commands is not enough by itself.

[root@master ~]# kubectl get pods
No resources found.

It only counts as successful once all the images have been pulled and every pod is deployed and running.

docker image ls
will show that the flannel image has been pulled.
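While waiting for the images, the rollout can be watched live with kubectl's standard -w (watch) flag, for example:

[root@master ~]# kubectl get pods -n kube-system -w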

 

[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 43h v1.14.1

[root@master ~]# kubectl get pods
No resources found.

[root@master ~]# kubectl get pods -n kube-system    (with the namespace specified)
NAME READY STATUS RESTARTS AGE
coredns-fb8b8dccf-8lczd 1/1 Running 0 43h
coredns-fb8b8dccf-rljmp 1/1 Running 0 43h
etcd-master 1/1 Running 2 43h
kube-apiserver-master 1/1 Running 3 43h
kube-controller-manager-master 1/1 Running 3 43h
kube-flannel-ds-amd64-mj4s6 1/1 Running 0 3m17s
kube-proxy-lwntd 1/1 Running 2 43h
kube-scheduler-master 1/1 Running 3 43h

[root@master ~]# kubectl get ns    (list the existing namespaces)
NAME STATUS AGE
default Active 43h
kube-node-lease Active 43h
kube-public Active 43h
kube-system Active 43h

System-level Pods all live in the kube-system namespace.
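To see pods in every namespace at once, kubectl's standard --all-namespaces option works:

[root@master ~]# kubectl get pods --all-namespaces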

 

Now it's time for the worker nodes to join the cluster.

[root@master ~]# scp /usr/lib/systemd/system/docker.service node1:/usr/lib/systemd/system/docker.service

[root@master ~]# scp /etc/sysconfig/kubelet node1:/etc/sysconfig/

The same files need to be copied to node2 as well.

Swap must be turned off on node1 and node2, and kubelet set to start automatically at boot:
[root@node1 ~]# systemctl enable docker kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@node1 ~]# swapoff -a
[root@node1 ~]# sed -i 's/.*swap.*/#&/' /etc/fstab

Run this on both node1 and node2.

[root@node1 ~]# kubeadm join 192.168.163.100:6443 --token braag1.mmx3cektd73oo1yd --discovery-token-ca-cert-hash sha256:252ef008db8fee83cc0f652424bb9bbf0d66934595de0cd170ebb68a1f2d8d84 --ignore-preflight-errors=Swap --ignore-preflight-errors=NumCPU


[root@node2 ~]# kubeadm join 192.168.163.100:6443 --token braag1.mmx3cektd73oo1yd --discovery-token-ca-cert-hash sha256:252ef008db8fee83cc0f652424bb9bbf0d66934595de0cd170ebb68a1f2d8d84 --ignore-preflight-errors=Swap --ignore-preflight-errors=NumCPU

If you've forgotten the token and the hash:
[root@master ~]# kubeadm token create
braag1.mmx3cektd73oo1yd
[root@master ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
252ef008db8fee83cc0f652424bb9bbf0d66934595de0cd170ebb68a1f2d8d84

Remember to put sha256: in front of the hash.
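A shortcut that avoids copying the token and hash by hand: kubeadm can print a complete, ready-to-run join command, as below.

[root@master ~]# kubeadm token create --print-join-command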

[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 45h v1.14.1
node1 Ready <none> 2m8s v1.14.1
node2 NotReady <none> 64s v1.14.1

With that, the worker nodes have officially joined the cluster.

Image pull script for the worker nodes:

[root@node2 ~]# cat dockerpull.sh
#!/bin/bash
K8S_VERSION=v1.14.1
FLANNEL_VERSION=v0.11.0-amd64
PAUSE_VERSION=3.1
# Core components
docker pull mirrorgooglecontainers/kube-proxy-amd64:$K8S_VERSION
docker tag mirrorgooglecontainers/kube-proxy-amd64:$K8S_VERSION k8s.gcr.io/kube-proxy:$K8S_VERSION
docker pull mirrorgooglecontainers/pause:$PAUSE_VERSION
docker tag mirrorgooglecontainers/pause:$PAUSE_VERSION k8s.gcr.io/pause:$PAUSE_VERSION
# Network component
docker pull quay.io/coreos/flannel:$FLANNEL_VERSION
# Remove the redundant images (kept commented out here)
#docker rmi mirrorgooglecontainers/kube-proxy-amd64:$K8S_VERSION
#docker rmi mirrorgooglecontainers/pause:$PAUSE_VERSION

 


[root@master ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-fb8b8dccf-8lczd 1/1 Running 0 47h
coredns-fb8b8dccf-rljmp 1/1 Running 0 47h
etcd-master 1/1 Running 2 47h
kube-apiserver-master 1/1 Running 3 46h
kube-controller-manager-master 1/1 Running 6 47h
kube-flannel-ds-amd64-26kk7 1/1 Running 0 85m
kube-flannel-ds-amd64-428x9 1/1 Running 0 86m
kube-flannel-ds-amd64-mj4s6 1/1 Running 0 3h26m
kube-proxy-5s2gz 1/1 Running 0 86m
kube-proxy-lwntd 1/1 Running 2 47h
kube-proxy-tjcpd 1/1 Running 0 85m
kube-scheduler-master 1/1 Running 5 47h

[root@master ~]# kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-fb8b8dccf-8lczd 1/1 Running 0 47h 10.244.0.2 master <none> <none>
coredns-fb8b8dccf-rljmp 1/1 Running 0 47h 10.244.0.3 master <none> <none>
etcd-master 1/1 Running 2 47h 192.168.163.100 master <none> <none>
kube-apiserver-master 1/1 Running 3 46h 192.168.163.100 master <none> <none>
kube-controller-manager-master 1/1 Running 6 47h 192.168.163.100 master <none> <none>
kube-flannel-ds-amd64-26kk7 1/1 Running 0 85m 192.168.163.102 node2 <none> <none>
kube-flannel-ds-amd64-428x9 1/1 Running 0 86m 192.168.163.101 node1 <none> <none>
kube-flannel-ds-amd64-mj4s6 1/1 Running 0 3h26m 192.168.163.100 master <none> <none>
kube-proxy-5s2gz 1/1 Running 0 86m 192.168.163.101 node1 <none> <none>
kube-proxy-lwntd 1/1 Running 2 47h 192.168.163.100 master <none> <none>
kube-proxy-tjcpd 1/1 Running 0 85m 192.168.163.102 node2 <none> <none>
kube-scheduler-master 1/1 Running 5 47h 192.168.163.100 master <none> <none>
