[Original] Deploying Kubernetes with kubeadm (Part 1)

#######################    Disclaimer    #####################

On the WeChat official account 木子李的菜田,

enter the keywords: k8s / linux

to find the full series of installation documents.

This document is based on notes from a hands-on deployment on two machines. Kubernetes is under constant development,

so there is no guarantee that every step still matches the current state of the project; if you run into problems during installation and deployment, please Google them.

This article is split into two parts:

[Original] Deploying Kubernetes with kubeadm (Part 1)

[Original] Deploying the Kubernetes Dashboard (Part 2)

What do you get by following the steps below?

1. Two hosts: one as the master (server), the other as a worker node

2. The dashboard add-on installed and deployed on the node, presented as the Kubernetes Dashboard

3. Solutions to the problems encountered along the way

#######################    Main Text    #####################
###
OS: CentOS 7.5
Kubernetes: v1.14.1
network model: flannel
###
【The following steps must be performed on BOTH machines】
 
0. Add both hosts to each machine's /etc/hosts file (example below) and stop the firewall:
    
systemctl stop firewalld && systemctl disable firewalld
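For reference, a minimal sketch of the hosts entries (the master IP 192.168.0.166 is the one used later in this guide; the node IP and both hostnames are hypothetical, substitute your own):

cat >> /etc/hosts <<EOF
192.168.0.166   k8s-master
192.168.0.168   k8s-node1
EOF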
1. Set up passwordless SSH login between the two machines, for example:
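A minimal sketch of the key exchange, assuming the other machine is reachable as k8s-node1 (hypothetical name); repeat in the other direction from the node:

ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa    # generate a key pair without a passphrase
ssh-copy-id root@k8s-node1                  # install the public key on the other machine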
2. Synchronize time between the machines (see http://www.javashuo.com/article/p-wgyqoufy-db.html); one common approach is sketched below.
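As a sketch, chrony works well on CentOS 7 (standard package and service names):

yum install -y chrony
systemctl start chronyd && systemctl enable chronyd
chronyc sources    # verify that the time sources are reachable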
3. Disable SELinux
# Set SELinux in permissive mode (effectively disabling it)
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
4. Enable packet forwarding
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system

  

5. Load the bridge netfilter module
modprobe br_netfilter
lsmod | grep br_netfilter
6. Check that the bridge-utils package is installed (if missing, reinstall it as shown below)
rpm -qa |grep bridge-utils
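If the query returns nothing, install it (standard CentOS package):

yum install -y bridge-utils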
7. Disable swap
swapoff -a

 

vim /etc/fstab
Add a # at the start of the swap line:
#/dev/mapper/centos-swap swap                    swap    defaults        0 0
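Equivalently, a one-line sketch that comments out any swap entry (assuming the usual fstab layout; double-check the file afterwards):

sed -i '/\sswap\s/s/^/#/' /etc/fstab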

  

8. (Optional, for reference only; if you skip it, kube-proxy uses iptables mode.) Prerequisites for enabling IPVS in kube-proxy
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
【Note】The script above creates /etc/sysconfig/modules/ipvs.modules so that the required modules are loaded automatically after a node reboot. Use lsmod | grep -e ip_vs -e nf_conntrack_ipv4 to verify that the required kernel modules loaded correctly.
You also need the ipset package installed on every node (yum install ipset). To inspect the IPVS proxy rules, it is best to install the management tool as well (yum install ipvsadm). If these prerequisites are not met,
kube-proxy will fall back to iptables mode even if its configuration enables IPVS. Actually switching to IPVS is sketched below.
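For completeness, a sketch of switching kube-proxy to IPVS once the cluster is up (the mode field is part of the standard kube-proxy configuration; the pod label below is the one kubeadm uses):

kubectl edit configmap kube-proxy -n kube-system            # set mode: "ipvs" in the config.conf section
kubectl delete pod -n kube-system -l k8s-app=kube-proxy     # recreate the kube-proxy pods so they pick up the change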

  

 
9. Install Docker
Since 1.6, Kubernetes has used the CRI (Container Runtime Interface). The default container runtime is still Docker, via the dockershim CRI implementation built into the kubelet.
Add the Docker yum repository:
yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sudo yum makecache fast

Or alternatively:

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo

【Note】

During the later kubeadm init you may see an error like this:
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.3. Latest validated version: 18.06
The supported Docker versions can be found in the Kubernetes changelog, e.g.:
  • kubeadm now properly recognizes Docker 18.09.0 and newer, but still treats 18.06 as the default supported version.
So pin a specific, validated Docker version when installing.
If a wrong Docker version is already installed, remove it first: yum remove docker-ce* -y
List the installable Docker versions:
yum list docker-ce.x86_64  --showduplicates |sort -r
docker-ce.x86_64            3:18.09.0-3.el7                    docker-ce-stable
docker-ce.x86_64            18.06.3.ce-3.el7                   docker-ce-stable
docker-ce.x86_64            18.06.2.ce-3.el7                   docker-ce-stable
docker-ce.x86_64            18.06.1.ce-3.el7                   docker-ce-stable
docker-ce.x86_64            18.06.0.ce-3.el7                   docker-ce-stable
 
yum install docker-ce-18.06.3.ce-3.el7 -y
 
systemctl start docker && systemctl enable docker  
 
Edit or create /etc/docker/daemon.json to configure a registry mirror:
[root@k8s-master flannel]# cat <<EOF  >/etc/docker/daemon.json
{
"registry-mirrors": ["https://72idtxd8.mirror.aliyuncs.com"]
}
EOF
 
systemctl reset-failed docker.service && systemctl restart docker.service
 
10. Install the Kubernetes packages
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
If you have unrestricted access to Google, use this repo instead:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF
sudo yum makecache fast
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
(--disableexcludes is required if you used the Google repo above, since it sets exclude=kube*; it is harmless with the Aliyun repo)
The official descriptions of the three tools:
kubeadm: the command to bootstrap the cluster.
kubelet: the component that runs on all of the machines in your cluster and does things like starting pods and containers.
kubectl: the command line util to talk to your cluster.
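Then enable kubelet so the kubeadm pre-flight check does not warn about it (per the official install docs; kubelet will keep restarting until kubeadm init runs, which is expected):

systemctl enable kubelet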
11. 【Run on the master machine only】Check the cgroup driver:
docker info | grep -i cgroup
Cgroup Driver: cgroupfs
If it is not cgroupfs, change it with the following commands:
sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl daemon-reload
systemctl restart kubelet    ---> if you restart kubelet at this point it fails with an error like:
failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file
"/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
The restart only succeeds once the cluster has been initialized, so kubeadm init has to run first.
%%%%%  The official documentation, for reference:


When using Docker, kubeadm will automatically detect the cgroup driver for the kubelet and set it in the /var/lib/kubelet/kubeadm-flags.env file during runtime.
If you are using a different CRI, you have to modify the file /etc/default/kubelet with your cgroup-driver value, like so:

KUBELET_EXTRA_ARGS=--cgroup-driver=<value>
This file will be used by kubeadm init and kubeadm join to source extra user defined arguments for the kubelet.
Please mind, that you only have to do that if the cgroup driver of your CRI is not cgroupfs, because that is the default value in the kubelet already.
Restarting the kubelet is required:


systemctl daemon-reload
systemctl restart kubelet

12. 【Perform this step on every node, including the master】

The k8s packages are now installed; next, prepare the images k8s needs. 【Otherwise, kubeadm init may fail with errors such as】:

  Failed to pull image "quay.io/coreos/flannel:v0.11.0-amd64"  and so on

To head off these problems, handle the images in advance: use kubeadm config images list to see which base images are required. They are called base images because kubeadm init itself needs them; add-ons installed later need further images, and when a problem appears you can check which image is missing.

[root@k8s-master ~]# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.13.4
k8s.gcr.io/kube-controller-manager:v1.13.4
k8s.gcr.io/kube-scheduler:v1.13.4
k8s.gcr.io/kube-proxy:v1.13.4
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.2.24
k8s.gcr.io/coredns:1.2.6  

Based on the image names listed above (the listing shown is from an earlier v1.13.4 run; the script below uses the v1.14.1 tags matching this install), I wrote a script to handle it:

Create the pull script: vim pull_image.sh
#!/bin/bash
#### base images #####
images=(
    kube-apiserver:v1.14.1
    kube-controller-manager:v1.14.1
    kube-scheduler:v1.14.1
    kube-proxy:v1.14.1
    pause:3.1
    etcd:3.3.10
    coredns:1.3.1
    kubernetes-dashboard-amd64:v1.10.1
)
for imageName in ${images[@]}; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
done
### add-on image, network: flannel ###
docker pull quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64
docker tag quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64
### runtime pause image ###
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
chmod a+x  pull_image.sh
./pull_image.sh
docker images

13. 【Run this step on the master machine】

 

 ~]# kubelet --version
Kubernetes v1.14.1  

Initialize the Kubernetes master:

kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.0.166 --kubernetes-version=v1.14.1  > kube_init.log
10.244.0.0/16 is the Pod network CIDR required by flannel (the official docs say you must use this value)
【From the official docs】
For flannel to work correctly, you must pass --pod-network-cidr=10.244.0.0/16 to kubeadm init.
kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.50.10.10 --kubernetes-version=v1.13.4
192.168.0.166 is the master's host IP address
v1.14.1 is the Kubernetes version found above

Below is my session log, for reference only:

 

[root@rancher ~]# kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.0.166 --kubernetes-version=v1.14.1
[init] Using Kubernetes version: v1.14.1
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [rancher kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.166]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [rancher localhost] and IPs [192.168.0.166 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [rancher localhost] and IPs [192.168.0.166 127.0.0.1 ::1]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 15.505486 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node rancher as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node rancher as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: dije5w.ipijm49d8c9isxie
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.166:6443 --token dije5w.ipijm49d8c9isxie \
    --discovery-token-ca-cert-hash sha256:c1aaaafc79d85141e60a73c43562e6b06cb8d9cdc24cc0a649c8d6e0f24c5f42

  

14. 【Run the following operations on the master】

For the complete contents of the following steps, go to the WeChat official account 木子李的菜田 and enter: k8s

To be able to run kubectl against the cluster from the node, the admin config file on the master needs to be copied to every node, for example as sketched below.

。。。。。  
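One common way to do this, as a sketch (paths are the kubeadm defaults; the node hostname is hypothetical):

scp /etc/kubernetes/admin.conf root@k8s-node1:/etc/kubernetes/admin.conf
# then, on the node:
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bash_profile
source ~/.bash_profile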

【Run the following operations on the node】

。。。。。

The next step is extremely important!!!!

。。。。。。

【Attention】If the first kubeadm init failed and you need to re-run kubeadm to initialize again, do the following:

 

。。。。。。
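As a sketch, the usual way back to a clean state before re-running init (kubeadm reset is the standard mechanism; the extra cleanup lines are common practice and may not all be needed in your case):

kubeadm reset                          # tear down the cluster state on this machine
rm -rf $HOME/.kube /etc/cni/net.d      # drop the stale kubeconfig and CNI config
# then re-run kubeadm init with your original flags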

Check whether Kubernetes is installed correctly:

[root@master ~]# kubectl get pods --namespace=kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-86c58d9df4-srtht         0/1     Pending   0          3h
coredns-86c58d9df4-tl7ww         0/1     Pending   0          3h
etcd-master                      1/1     Running   0          179m
kube-apiserver-master            1/1     Running   0          179m
kube-controller-manager-master   1/1     Running   0          3h
kube-proxy-2sdmn                 1/1     Running   1          3h
kube-proxy-ln5tk                 1/1     Running   1          173m
kube-scheduler-master            1/1     Running   0          3h

 

Many pods are not yet correct at this point; don't panic, continue with steps 15 and 16.

15. 【Master only】Configure the CNI (this step is closely tied to networking)

vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Add the following content:
。。。。。。

  

16. Install flannel

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
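Before moving on, you can watch flannel and CoreDNS come up:

kubectl get pods -n kube-system -o wide    # the flannel pods should reach Running on every node
kubectl get nodes                          # nodes report Ready once the CNI is working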

  

17. Finally, check on both the master and the node:

Only when everything is Running is the deployment truly correct.

[root@master ~]# kubectl get pods --namespace=kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-86c58d9df4-srtht         1/1     Running   0          3h16m
coredns-86c58d9df4-tl7ww         1/1     Running   0          3h16m
etcd-master                      1/1     Running   1          3h15m
kube-apiserver-master            1/1     Running   1          3h15m
kube-controller-manager-master   1/1     Running   1          3h16m
kube-flannel-ds-amd64-7mntm      1/1     Running   0          12m
kube-flannel-ds-amd64-bxzdn      1/1     Running   0          12m
kube-proxy-2sdmn                 1/1     Running   1          3h16m
kube-proxy-ln5tk                 1/1     Running   1          3h9m
kube-scheduler-master            1/1     Running   1          3h16m

 End

Some of the pitfalls in this article have already been dealt with by the steps above. If you still run into other problems, please Google them, or

use kubectl describe to check the cause; in particular, note whether the cause lies on the master or on a node. For example:
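Using a pod and node name from the earlier output:

kubectl describe pod coredns-86c58d9df4-srtht -n kube-system    # events explain why a pod is Pending
kubectl describe node master                                    # node-level view: taints, conditions, resource pressure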
