k8s (1) - Installing Kubernetes with kubeadm

Preparation before installation

 

1. One or more hosts; three machines are prepared here

Role      IP             Hostname   Spec (minimum)   OS version
Master    192.168.0.10   master     2 CPU / 2G       CentOS 7.6.1810
Worker    192.168.0.11   node1      2 CPU / 2G       CentOS 7.6.1810
Worker    192.168.0.12   node2      2 CPU / 2G       CentOS 7.6.1810

 

2. Verify that the MAC address and product_uuid are unique on every node

  • You can get the MAC address of the network interfaces with ip link or ifconfig -a
  • You can check the product_uuid with sudo cat /sys/class/dmi/id/product_uuid

 

3. Check the required ports (the port lists below follow the official kubeadm documentation)

  • Master node: TCP 6443 (Kubernetes API server), 2379-2380 (etcd server client API), 10250 (kubelet API), 10251 (kube-scheduler), 10252 (kube-controller-manager)

  • Worker nodes: TCP 10250 (kubelet API), 30000-32767 (NodePort Services)
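
Before installing, you can check that none of these ports are already occupied. A minimal sketch using ss (adjust the port list per node role):

# On the master: any hit means the port is already in use
ss -lnt | grep -E ':(6443|2379|2380|1025[0-2]) ' || echo "required ports are free"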

 

 4. Synchronize the clock on all nodes

yum install ntpdate -y
ntpdate 0.asia.pool.ntp.org
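
ntpdate performs a one-off sync; to keep the clocks from drifting you could add a periodic resync via cron (a sketch; running chronyd or ntpd as a service is the more robust choice):

# Resync against the same pool every 30 minutes
(crontab -l 2>/dev/null; echo '*/30 * * * * /usr/sbin/ntpdate 0.asia.pool.ntp.org >/dev/null 2>&1') | crontab -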

  

5. Set the hostname; run the matching command on each of the three nodes

hostnamectl set-hostname master
hostnamectl set-hostname node1
hostnamectl set-hostname node2

  

  

6. Add the following entries to the hosts file on all three nodes

192.168.0.10 master
192.168.0.11 node1
192.168.0.12 node2
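
A sketch that appends these entries to /etc/hosts and then verifies name resolution:

cat <<EOF >> /etc/hosts
192.168.0.10 master
192.168.0.11 node1
192.168.0.12 node2
EOF
# Each hostname should now answer
ping -c 1 node1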

 

7. Disable the firewall and SELinux on all nodes

systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
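
You can confirm that both changes took effect (a quick sketch):

# Should print "permissive" (or "disabled")
getenforce
# Should print "inactive" once firewalld is stopped
systemctl is-active firewalld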

  

8. On CentOS 7 you must make sure net.bridge.bridge-nf-call-iptables is set to 1 in the sysctl configuration. Run the following on all nodes:

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
EOF
sysctl --system
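
If sysctl --system complains that the net.bridge.* keys do not exist, the br_netfilter kernel module is probably not loaded yet; a sketch that loads it now and on every boot:

modprobe br_netfilter
# Load the module automatically at boot
cat <<EOF > /etc/modules-load.d/k8s.conf
br_netfilter
EOF
sysctl --system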

  

9. Disable swap

swapoff -a

 Also comment out the swap mount line in /etc/fstab so swap stays disabled after a reboot.
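
swapoff -a only lasts until the next reboot; a hedged one-liner for the fstab edit (it assumes the swap entry is a line with "swap" as one of its fields):

# Comment out the swap mount line, then verify swap totals read 0
sed -ri '/[[:space:]]swap[[:space:]]/s/^/#/' /etc/fstab
free -m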

 

Install Docker CE

 

yum remove docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install docker-ce -y
systemctl start docker
systemctl enable docker
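
The kubeadm preflight check later warns that Docker 18.09.1 is not on the list of validated versions (the latest validated is 18.06). To avoid the warning you could pin the validated release instead; a sketch, assuming the 18.06.1.ce package is still available in the repo:

# List available versions, then install a specific one
yum list docker-ce --showduplicates | sort -r
yum install -y docker-ce-18.06.1.ce
systemctl start docker && systemctl enable docker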

  

 

Install kubeadm, kubelet, and kubectl

 

  • kubeadm: the command to bootstrap the cluster.
  • kubelet: the component that runs on all of the machines in the cluster and does things like starting pods and containers.
  • kubectl: the command line utility to talk to the cluster.

 

 1. Configure the package repository

Run on every node:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg 
        https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

  

2. Install the packages, and enable kubelet to start at boot

yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

systemctl enable kubelet && systemctl start kubelet
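
The command above installs the latest packages from the repo; to pin a specific release (a sketch, using 1.13.2 to match the images pulled later):

yum list --showduplicates kubeadm --disableexcludes=kubernetes
yum install -y kubelet-1.13.2 kubeadm-1.13.2 kubectl-1.13.2 --disableexcludes=kubernetes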

  

From the install output you can see that three dependencies are installed as well: cri-tools, kubernetes-cni, and socat:
  Since Kubernetes 1.9 the cni dependency has been at version 0.6.0 upstream, and it remains at that version in 1.12
  socat is a dependency of kubelet
  cri-tools is the command line tool for the CRI (Container Runtime Interface)

 

Create the cluster

 

1. Initialize
kubeadm init \
  --pod-network-cidr=10.244.0.0/16 \
  --apiserver-advertise-address=192.168.0.10

  Notes on the flags:

    For the flannel network to work correctly, you must pass --pod-network-cidr=10.244.0.0/16

    kubeadm uses the network interface associated with the default gateway to advertise the master's IP; to use a different interface, add --apiserver-advertise-address=<ip-address>

Initialization fails with image pull errors; this is because k8s.gcr.io is blocked by the firewall inside China:

error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.13.0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.13.0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.13.0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.13.0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/etcd:3.2.24: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.2.6: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

  The workaround is below; these images should also be pulled on the worker nodes (make sure the tags match the version kubeadm expects; kubeadm config images list prints them):

 # Pull the required images from Docker Hub
 docker pull mirrorgooglecontainers/kube-apiserver:v1.13.2
 docker pull mirrorgooglecontainers/kube-controller-manager:v1.13.2
 docker pull mirrorgooglecontainers/kube-scheduler:v1.13.2
 docker pull mirrorgooglecontainers/kube-proxy:v1.13.2
 docker pull mirrorgooglecontainers/pause:3.1
 docker pull mirrorgooglecontainers/etcd:3.2.24
 docker pull coredns/coredns:1.2.6
 # Re-tag the Docker Hub images as k8s.gcr.io
 docker tag docker.io/mirrorgooglecontainers/kube-apiserver:v1.13.2 k8s.gcr.io/kube-apiserver:v1.13.2
 docker tag docker.io/mirrorgooglecontainers/kube-controller-manager:v1.13.2 k8s.gcr.io/kube-controller-manager:v1.13.2
 docker tag docker.io/mirrorgooglecontainers/kube-scheduler:v1.13.2 k8s.gcr.io/kube-scheduler:v1.13.2
 docker tag docker.io/mirrorgooglecontainers/kube-proxy:v1.13.2 k8s.gcr.io/kube-proxy:v1.13.2
 docker tag docker.io/mirrorgooglecontainers/pause:3.1  k8s.gcr.io/pause:3.1
 docker tag docker.io/mirrorgooglecontainers/etcd:3.2.24  k8s.gcr.io/etcd:3.2.24
 docker tag docker.io/coredns/coredns:1.2.6  k8s.gcr.io/coredns:1.2.6
 # Remove the leftover source tags
 docker rmi mirrorgooglecontainers/kube-apiserver:v1.13.2
 docker rmi mirrorgooglecontainers/kube-controller-manager:v1.13.2
 docker rmi mirrorgooglecontainers/kube-scheduler:v1.13.2
 docker rmi mirrorgooglecontainers/kube-proxy:v1.13.2
 docker rmi mirrorgooglecontainers/pause:3.1
 docker rmi mirrorgooglecontainers/etcd:3.2.24
 docker rmi coredns/coredns:1.2.6
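
The same pull/tag/clean-up sequence can be scripted in one loop; a sketch (coredns lives under a different Docker Hub namespace, so it is handled separately):

# Mirror, re-tag, and clean up in one pass
images=(kube-apiserver:v1.13.2 kube-controller-manager:v1.13.2 kube-scheduler:v1.13.2 kube-proxy:v1.13.2 pause:3.1 etcd:3.2.24)
for img in "${images[@]}"; do
  docker pull mirrorgooglecontainers/$img
  docker tag mirrorgooglecontainers/$img k8s.gcr.io/$img
  docker rmi mirrorgooglecontainers/$img
done
docker pull coredns/coredns:1.2.6
docker tag coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6
docker rmi coredns/coredns:1.2.6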

  

  

Run the initialization again; this time it succeeds. The full log below records the output, from which you can basically see the key steps needed to manually bootstrap a Kubernetes cluster.

[root@master ~]# kubeadm init \
>   --kubernetes-version=v1.13.0 \
>   --pod-network-cidr=10.244.0.0/16 \
>   --apiserver-advertise-address=192.168.0.10
[init] Using Kubernetes version: v1.13.0
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.1. Latest validated version: 18.06
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.10]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [192.168.0.10 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [192.168.0.10 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 30.506947 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "master" as an annotation
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: k9bohf.6zl3ovmlkf4iwudg
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.0.10:6443 --token k9bohf.6zl3ovmlkf4iwudg --discovery-token-ca-cert-hash sha256:e73fe78ac9a961582a0ad81e3bffbebfa1d81b37b2f0cd7aacbfa6c1a1de351c

  

 To make kubectl work for a non-root user, run the following commands, which are also part of the kubeadm init output:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

  Alternatively, if you are root, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf
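
To make the variable survive new shells, you could append it to root's profile (a sketch):

echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bashrc
source ~/.bashrc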

  

2. Install a pod network add-on

flannel is used as the pod network here; run this on the master node:

[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created

  

  

Once the pod network is installed, you can check that the CoreDNS pods are up and running with:

[root@master ~]# kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   coredns-86c58d9df4-fvrfv         1/1     Running   0          17m
kube-system   coredns-86c58d9df4-lgsxd         1/1     Running   0          17m
kube-system   etcd-master                      1/1     Running   0          16m
kube-system   kube-apiserver-master            1/1     Running   0          16m
kube-system   kube-controller-manager-master   1/1     Running   0          16m
kube-system   kube-flannel-ds-amd64-h56rl      1/1     Running   0          92s
kube-system   kube-proxy-99hmp                 1/1     Running   0          17m
kube-system   kube-scheduler-master            1/1     Running   0          16m

  If anything reports an error, troubleshoot as follows:

      Check the system log: /var/log/messages

      kubectl --namespace kube-system logs kube-flannel-ds-amd64-wdqsl

3.  加入工做節點

在工做節點執行初始化時日誌的輸出。

 kubeadm join 192.168.0.10:6443 --token qrduq2.24km2r56xg24o6yw --discovery-token-ca-cert-hash sha256:54c981ec4220202107b1ef89907d31af94840ed75a1e9f352f9288245760ac83

 

Renewing the token

By default a token expires after 24 hours; use the following to create a new one:

kubeadm token create
kubeadm token list
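
kubeadm can also print a complete, ready-to-use join command, and the --discovery-token-ca-cert-hash value can be recomputed from the CA certificate if you lose it (both commands come from the official docs):

# Create a token and print the full join command in one step
kubeadm token create --print-join-command
# Recompute the CA cert hash
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
  openssl rsa -pubin -outform der 2>/dev/null | \
  openssl dgst -sha256 -hex | sed 's/^.* //'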

  

 4. Check the node status

A status of Ready means the node is communicating with the master normally:

[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE    VERSION
master   Ready    master   104m   v1.13.2
node1    Ready    <none>   34m    v1.13.2
node2    Ready    <none>   29m    v1.13.2

 5. Schedule workloads on the master node (optional)

By default, for security reasons, the cluster does not schedule pods on the master. If you want to be able to schedule pods on the master, for example in a single-machine Kubernetes cluster used for development, run:

[root@master ~]# kubectl describe node master | grep Taint
Taints:             node-role.kubernetes.io/master:NoSchedule
[root@master ~]# kubectl taint nodes --all node-role.kubernetes.io/master-
node/master untainted
taint "node-role.kubernetes.io/master:" not found
taint "node-role.kubernetes.io/master:" not found

  

6. Control the cluster from a machine other than the master (optional)

Run on the worker node:

scp root@<master ip>:/etc/kubernetes/admin.conf .
kubectl --kubeconfig ./admin.conf get nodes
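
As the official docs note, the same admin.conf can also be used to proxy the API server to localhost (a sketch):

kubectl --kubeconfig ./admin.conf proxy
# In another shell, talk to the API locally
curl http://127.0.0.1:8001/api/v1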

  

Tearing down the cluster

 

Run on the master node:

kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>

 Then, on the node being removed, reset all state installed by kubeadm:

kubeadm reset

  The reset process does not reset or clean up iptables rules or IPVS tables. If you want to reset iptables, you must do it manually:

iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

  If you want to reset the IPVS tables, run the following command:

ipvsadm -C

  

 

Reference: https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/
