https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/

[root@node1 ~]# cd /etc/yum.repos.d/
[root@node1 yum.repos.d]# ls
CentOS-Base.repo       CentOS-fasttrack.repo  CentOS-Vault.repo  k8s.repo
CentOS-CR.repo         CentOS-Media.repo      epel.repo
CentOS-Debuginfo.repo  CentOS-Sources.repo    epel-testing.repo
[root@node1 yum.repos.d]# vi k8s.repo
[kubernetes]
name=kubernetes repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
enabled=1
[root@node1 yum.repos.d]# yum install kubelet kubeadm kubectl docker-ce
Import the Docker images needed by Kubernetes. By default these images are hosted on Google's registry and cannot be reached from here, so first pull the required images from Docker Hub to the local machine, retag them, and then run kubeadm init.
kubeadm first checks whether the corresponding Docker images already exist locally. If they do, it starts the local images directly; otherwise it tries to pull them from the https://k8s.gcr.io registry.
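To see up front exactly which images and tags a given kubeadm release expects, kubeadm has an images subcommand; a minimal sketch (the version number here is simply the one used later in this walkthrough):

# list the images kubeadm would pull for this release
kubeadm config images list --kubernetes-version v1.12.1
# optionally try to pre-pull them (this is the command the preflight hint below refers to)
kubeadm config images pull --kubernetes-version v1.12.1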
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[preflight] Some fatal errors occurred:
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.12.1: output: Trying to pull repository k8s.gcr.io/kube-apiserver ... Get https://k8s.gcr.io/v1/_ping: dial tcp 74.125.204.82:443: getsockopt: connection refused, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.12.1: output: Trying to pull repository k8s.gcr.io/kube-controller-manager ... Get https://k8s.gcr.io/v1/_ping: dial tcp 74.125.204.82:443: getsockopt: connection refused, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.12.1: output: Trying to pull repository k8s.gcr.io/kube-scheduler ... Get https://k8s.gcr.io/v1/_ping: dial tcp 74.125.204.82:443: getsockopt: connection refused, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.12.1: output: Trying to pull repository k8s.gcr.io/kube-proxy ... Get https://k8s.gcr.io/v1/_ping: dial tcp 74.125.204.82:443: getsockopt: connection refused, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.1: output: Trying to pull repository k8s.gcr.io/pause ... Get https://k8s.gcr.io/v1/_ping: dial tcp 108.177.125.82:443: getsockopt: connection refused, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/etcd:3.2.24: output: Trying to pull repository k8s.gcr.io/etcd ... Get https://k8s.gcr.io/v1/_ping: dial tcp 108.177.125.82:443: getsockopt: connection refused, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.2.2: output: Trying to pull repository k8s.gcr.io/coredns ... Get https://k8s.gcr.io/v1/_ping: dial tcp 108.177.125.82:443: getsockopt: connection refused, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
The messages above list the exact image names and tags that are required. Use those names and tags to pull the corresponding images from Docker Hub to the local machine.
For example, an earlier release was handled with the following pull-and-retag loop (images mirrored under cloudnil, retagged to the old gcr.io/google_containers names); the versions needed here are handled the same way below:

images=(etcd-amd64:3.0.17 pause-amd64:3.0 kube-proxy-amd64:v1.7.2 kube-scheduler-amd64:v1.7.2 kube-controller-manager-amd64:v1.7.2 kube-apiserver-amd64:v1.7.2 kubernetes-dashboard-amd64:v1.6.1 k8s-dns-sidecar-amd64:1.14.4 k8s-dns-kube-dns-amd64:1.14.4 k8s-dns-dnsmasq-nanny-amd64:1.14.4)
for imageName in ${images[@]} ; do
    docker pull cloudnil/$imageName
    docker tag cloudnil/$imageName gcr.io/google_containers/$imageName
    docker rmi cloudnil/$imageName
done
Following the preflight messages, pull the corresponding versions of the images from Docker Hub (check on Docker Hub first which versions are actually available):

docker pull mirrorgooglecontainers/kube-apiserver-amd64:v1.12.1
docker pull mirrorgooglecontainers/kube-controller-manager-amd64:v1.12.1
docker pull mirrorgooglecontainers/kube-scheduler-amd64:v1.12.1
docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.12.1
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd-amd64:3.2.24
docker pull coredns/coredns:1.2.2

Then retag the downloaded images to the names and tags kubeadm expects, for example:

docker tag docker.io/mirrorgooglecontainers/kube-apiserver-amd64:v1.12.1 k8s.gcr.io/kube-apiserver:v1.12.1
docker tag docker.io/mirrorgooglecontainers/etcd-amd64:3.2.24 k8s.gcr.io/etcd:3.2.24
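Since every image needs the same pull, retag and cleanup treatment, a small loop saves typing; this is only a sketch, assuming the mirrorgooglecontainers and coredns repositories on Docker Hub carry these exact tags:

# pull each required image from a Docker Hub mirror, retag it to the
# k8s.gcr.io name kubeadm expects, then remove the mirror tag
images=(kube-apiserver:v1.12.1 kube-controller-manager:v1.12.1 kube-scheduler:v1.12.1 kube-proxy:v1.12.1 pause:3.1 etcd:3.2.24 coredns:1.2.2)
for i in ${images[@]} ; do
    name=${i%%:*}
    tag=${i##*:}
    # pick the Docker Hub source repository for each image
    if [ "$name" = "coredns" ]; then
        src=coredns/coredns:$tag
    elif [ "$name" = "pause" ]; then
        src=mirrorgooglecontainers/pause:$tag
    else
        src=mirrorgooglecontainers/${name}-amd64:$tag
    fi
    docker pull $src
    docker tag $src k8s.gcr.io/$i
    docker rmi $src
done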
[root@node1 ~]# echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
[root@node1 ~]# echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables

vi /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"

[root@node1 ~]# kubeadm init --kubernetes-version=v1.12.1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
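The two echo commands above take effect immediately but do not survive a reboot. A minimal sketch for making them persistent (a sysctl drop-in file, not part of the original steps):

# persist the bridge netfilter settings across reboots
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system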
Stop all running containers:
docker stop $(docker ps -q)

Remove all containers:
docker rm $(docker ps -aq)

Stop and remove all containers in one command:
docker stop $(docker ps -q) && docker rm $(docker ps -aq)

To remove untagged images (those whose repository shows as <none>):
docker rmi $(docker images | grep "^<none>" | awk '{print $3}')

To remove all images:
docker rmi $(docker images -q)

To force-remove all images:
docker rmi -f $(docker images -q)
# -*- coding: UTF-8 -*-
import re
import os
import subprocess

if __name__ == "__main__":
    os.system('rm -rf /tmp/k8s')
    os.system('mkdir -p /tmp/k8s')
    # list the local docker images and read the output line by line
    p = subprocess.Popen('docker images', shell=True, stdout=subprocess.PIPE)
    for line in p.stdout.readlines():
        # this regex matches image names that start with k8s;
        # adjust it as needed for your own images
        m = re.match(r'(^k8s[^\s]*\s*)\s([^\s]*\s)', line.decode("utf-8"))
        if not m:
            continue
        # image name
        iname = m.group(1).strip(' ')
        # tag
        itag = m.group(2).strip(' ')
        # name of the tar file
        tarname = iname.split('/')[-1]
        print(tarname)
        tarball = tarname + '.tar'
        ifull = iname + ':' + itag
        # save the image to a tarball
        cmd = 'docker save -o ' + tarball + ' ' + ifull
        os.system(cmd)
        # move the tarball to the temp directory
        os.system('mv %s /tmp/k8s/'%tarball)
Run it:
python save.py
import os
import sys

if __name__ == "__main__":
    #tarball = sys.argv[1]
    #print(tarball)
    workdir = '/tmp/k8s'
    #os.system('rm -rf %s'%workdir)
    #os.system('mkdir -p %s'%workdir)
    #os.system('tar -zxvf %s -C %s'%(tarball, workdir))
    os.chdir(workdir)
    files = os.listdir(workdir)
    for filename in files:
        print(filename)
        os.system('docker load -i %s'%filename)
You can compress all the image tarballs into a single archive, upload it to the working directory, and run: python dockerload.py Images.tar.gz (for this mode, uncomment the tarball-handling lines at the top of dockerload.py).
Alternatively, upload the individual image tarballs to the working directory and simply run: python dockerload.py
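If the target machines are reachable over SSH, copying the saved tarballs and loading them can also be done in one pass. A sketch only; the host name node2 and the /tmp/k8s path are assumptions for illustration:

# copy the saved images to the target node and load them there
ssh root@node2 'mkdir -p /tmp/k8s'
scp /tmp/k8s/*.tar root@node2:/tmp/k8s/
ssh root@node2 'for f in /tmp/k8s/*.tar; do docker load -i "$f"; done'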
kubeadm init --kubernetes-version=v1.12.1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
[init] using Kubernetes version: v1.12.1
[preflight] running pre-flight checks
	[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
	[WARNING Swap]: running with swap on is not supported. Please disable swap
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [node1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.11.134]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [node1 localhost] and IPs [192.168.11.134 127.0.0.1 ::1]
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [node1 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 33.015497 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node node1 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node node1 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node1" as an annotation
[bootstraptoken] using token: 8955q6.o8fopwcef7i3fn4w
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 192.168.11.134:6443 --token 8955q6.o8fopwcef7i3fn4w --discovery-token-ca-cert-hash sha256:a3ec67c68a14b2e587aecb016c9050c36dc579069725eb261b27a0cd1eb024dc
Following the instructions printed above, set up kubectl access and check the component status:

[root@node1 ~]# mkdir -p $HOME/.kube
[root@node1 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@node1 ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}
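Since these commands are being run as root here, an alternative (not shown in the original output) is to point kubectl straight at the admin kubeconfig instead of copying it:

# use the admin kubeconfig directly instead of copying it to ~/.kube/config
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get cs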
At the time of writing, Kubernetes v1.12.0 could not start the flannel container after initialization, which left all nodes stuck in the NotReady state. Deploying v1.11.1 instead succeeded; the procedure is as follows.
1. Install Docker
wget -c https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm
wget -c https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm
yum install -y docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm

[root@k8s-master ~]# vi /usr/lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd --exec-opt native.cgroupdriver=systemd

systemctl daemon-reload && systemctl enable docker && systemctl start docker
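Editing the unit file works, but it may be overwritten when the docker-ce package is upgraded. A sketch of an alternative (not used in the original walkthrough) is to set the cgroup driver through /etc/docker/daemon.json instead:

# configure the systemd cgroup driver via the daemon config file
mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload && systemctl restart docker
docker info | grep -i cgroup   # should report: Cgroup Driver: systemd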
2. Prepare the images
Load the pre-downloaded images required by Kubernetes into the local Docker daemon.
[root@k8s-master images]# ls
coredns.tar     kube-apiserver-amd64.tar           kube-proxy-amd64.tar      pause.tar
etcd-amd64.tar  kube-controller-manager-amd64.tar  kube-scheduler-amd64.tar
flannel.tar     kube-flannel.yml                   master.sh
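A minimal sketch that loads every tarball in this directory in one go:

# load all saved images from the current directory into the local docker daemon
for f in *.tar; do
    docker load -i "$f"
done
docker images   # verify the k8s.gcr.io and flannel images are now present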
3. Add the kubeadm yum repository
cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
EOF

# list the available versions
yum list kubeadm --showduplicates
# the latest version is currently v1.12.0, which is problematic; pin the installs to 1.11.1
yum install kubectl-1.11.1
yum install kubelet-1.11.1
yum install kubeadm-1.11.1

Extra kubelet arguments: make sure kubelet and Docker use the same cgroup driver (systemd here):

vi /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice"

systemctl enable kubelet && systemctl start kubelet
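Before initializing, it is worth confirming that the pinned versions were actually installed and that Docker reports the systemd cgroup driver; a small sketch:

# confirm the pinned package versions
rpm -q kubeadm kubelet kubectl
# confirm docker's cgroup driver matches what kubelet expects
docker info 2>/dev/null | grep -i 'cgroup driver'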
4. Initialize the cluster
kubeadm init --kubernetes-version=v1.11.1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12

Grant kubectl access:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Configure the Kubernetes network (flannel):

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml

If the host has multiple network interfaces, specify the name of the cluster's internal interface with the --iface argument in kube-flannel.yml, along the lines of:

args:
- --ip-masq
- --kube-subnet-mgr
- --iface=eth1
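The --iface line can be added in an editor, or with a quick in-place edit; a sketch only, assuming the flannel container's args block in kube-flannel.yml looks like the snippet above and that eth1 is the internal interface:

# append an --iface argument right after the --kube-subnet-mgr line in the flannel manifest
sed -i '/- --kube-subnet-mgr/a\        - --iface=eth1' kube-flannel.yml
kubectl apply -f kube-flannel.yml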
5. Verify that the master started correctly
[root@k8s-master ~]# kubectl get pod --all-namespaces
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE
kube-system   coredns-78fcdf6894-8fprv             1/1     Running   0          24m
kube-system   coredns-78fcdf6894-rdrm7             1/1     Running   0          24m
kube-system   etcd-k8s-master                      1/1     Running   0          23m
kube-system   kube-apiserver-k8s-master            1/1     Running   0          23m
kube-system   kube-controller-manager-k8s-master   1/1     Running   0          23m
kube-system   kube-flannel-ds-amd64-xh8ln          1/1     Running   0          22m
kube-system   kube-proxy-l2tbn                     1/1     Running   0          24m
kube-system   kube-scheduler-k8s-master            1/1     Running   0          23m

[root@k8s-master ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   29m   v1.11.1
[root@k8s-master ~]#

Output like the above means everything is running normally.
6. Join nodes to the cluster
1. Install Docker, following the same steps as on the master.
2. Configure the kubeadm yum repository.
3. Load the local images into Docker:
[root@node2 ~]# docker load -i flannel.tar
[root@node2 ~]# docker load -i kube-proxy-amd64.tar
[root@node2 ~]# docker load -i pause.tar
Set the kubelet extra arguments and the Docker cgroup driver on the node, as on the master:

[root@node2 ~]# vi /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice"

[root@node2 ~]# vi /usr/lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd --exec-opt native.cgroupdriver=systemd

systemctl daemon-reload && systemctl enable docker && systemctl start docker
[root@node2 ~]# kubeadm join 192.168.11.141:6443 --token rumavf.aphute2alfbnqj8x --discovery-token-ca-cert-hash sha256:f51ee790dbef8e1f1e36315ba99bc476b0315bb4a861b3d6caef9cbc8b411397
[preflight] running pre-flight checks
	[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_wrr ip_vs_sh ip_vs ip_vs_rr] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support

I1004 04:55:53.603066    2118 kernel_validator.go:81] Validating kernel version
I1004 04:55:53.603140    2118 kernel_validator.go:96] Validating kernel config
[discovery] Trying to connect to API Server "192.168.11.141:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.11.141:6443"
[discovery] Requesting info from "https://192.168.11.141:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.11.141:6443"
[discovery] Successfully established connection with API Server "192.168.11.141:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node2" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
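The join command above uses the token printed by kubeadm init. Bootstrap tokens expire after 24 hours by default, so if a node is added later a fresh join command can be generated on the master; a small sketch, assuming the --print-join-command flag is available in this kubeadm release:

# run on the master: create a new bootstrap token and print the matching join command
kubeadm token create --print-join-command
# list existing tokens and their expiry
kubeadm token list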
[root@k8s-master images]# kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   47m   v1.11.1
node2        Ready    <none>   2m    v1.11.1