The installation was done in VMware virtual machines running CentOS 7 64-bit. The plan is to build a k8s cluster out of three VMs, with the network in NAT mode. The IP addresses of the three machines are:
The Docker version needs to be 18 or newer; I used the CE edition, so the exact version is 18.06.3-ce. The k8s version is v1.15.0.
Docker itself has no particular environment requirements, so the preparation steps below are all for k8s.
# Set SELinux to permissive mode (effectively disable it)
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# Turn off swap
swapoff -a
cp -p /etc/fstab /etc/fstab.bak$(date '+%Y%m%d%H%M%S')
sed -i "s/\/dev\/mapper\/centos-swap/\#\/dev\/mapper\/centos-swap/g" /etc/fstab
systemctl daemon-reload

# Shut down the firewall; this is the simplest approach, and when the machines sit in a VPC the internal network is already safe
systemctl stop firewalld
systemctl disable firewalld

# Set the kernel network-forwarding parameters
echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
sysctl -p
modprobe br_netfilter
echo "modprobe br_netfilter" >> /etc/rc.local
sysctl -p

# Configure the Kubernetes yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
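A quick sanity check that the preparation actually took effect (these checks are my addition, not part of the original post):

getenforce                                  # should print Permissive
free -h                                     # the Swap line should show 0B
sysctl net.bridge.bridge-nf-call-iptables   # should print ... = 1
sysctl net.ipv4.ip_forward                  # should print ... = 1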
yum install docker-ce kubelet kubeadm kubectl
systemctl enable docker && systemctl restart docker
systemctl enable kubelet && systemctl start kubelet
Note: the Docker version that CentOS installs by default is quite old (in my case it was 1.13), so when you need the latest version you have to uninstall it and reinstall the newest Docker. Since March 1, 2017, Docker's version naming has changed and the product has been split into a CE and an EE edition. The difference is as follows: Docker Community Edition (CE) is aimed at developers and small teams who build container-based applications and share them with team members through automated development pipelines; docker-ce offers a simple, quick installation so you can start developing immediately, and is integrated and optimized for the underlying infrastructure (free). Docker Enterprise Edition (EE) is built for enterprise development and IT teams; docker-ee provides the most secure, application-centric container platform for enterprises (paid).
# Stop the service
systemctl stop docker

# Remove the existing docker packages
yum erase docker \
          docker-*

# Add the Aliyun mirror repository; docker.io is very slow
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager \
    --add-repo \
    http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# Install and start docker
yum install docker-ce -y
systemctl start docker
systemctl enable docker
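After the reinstall, Docker should report the 18.06.3-ce version mentioned above; a quick check (my addition):

docker version --format '{{.Server.Version}}'   # expected: 18.06.3-ce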
Kubernetes is developed by Google (in Go), so by default kubeadm pulls its images from Google's registry, k8s.gcr.io, which we cannot reach directly from here, so we need another way to fetch them. We wrote a script, pull_images.sh, that pulls the images from hub.docker.com instead.
#!/bin/bash
gcr_name=k8s.gcr.io
hub_name=mirrorgooglecontainers

# define images
images=(
    kubernetes-dashboard-amd64:v1.10.1
    kube-apiserver:v1.15.0
    kube-controller-manager:v1.15.0
    kube-scheduler:v1.15.0
    kube-proxy:v1.15.0
    pause:3.1
    etcd:3.3.10
)

for image in ${images[@]}; do
    docker pull $hub_name/$image
    docker tag $hub_name/$image $gcr_name/$image
    docker rmi $hub_name/$image
done

docker pull coredns/coredns:1.3.1
docker tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
docker rmi coredns/coredns:1.3.1
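Assuming the script above is saved as pull_images.sh, it can be made executable and run (on the master, and presumably on each worker node as well, since the nodes also need kube-proxy, pause and dashboard images):

chmod +x pull_images.sh
./pull_images.sh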
The principle is simple: after pulling each image from the mirror, re-tag it into the k8s.gcr.io namespace. When the script finishes, the images look like this:
[root@k8s-master ~]# docker images
REPOSITORY                              TAG       IMAGE ID       CREATED         SIZE
k8s.gcr.io/kube-proxy                   v1.15.0   d235b23c3570   4 weeks ago     82.4MB
k8s.gcr.io/kube-apiserver               v1.15.0   201c7a840312   4 weeks ago     207MB
k8s.gcr.io/kube-controller-manager      v1.15.0   8328bb49b652   4 weeks ago     159MB
k8s.gcr.io/kube-scheduler               v1.15.0   2d3813851e87   4 weeks ago     81.1MB
k8s.gcr.io/coredns                      1.3.1     eb516548c180   6 months ago    40.3MB
k8s.gcr.io/etcd                         3.3.10    2c4adeb21b4f   7 months ago    258MB
k8s.gcr.io/kubernetes-dashboard-amd64   v1.10.0   0dab2435c100   10 months ago   122MB
k8s.gcr.io/pause                        3.1       da86e6ba6ca1   19 months ago   742kB
At this point the k8s installation itself is actually complete; next we initialize the master node by running the following command:
# Our VMs basically have a single CPU, while k8s recommends at least 2, so we ignore this preflight error
kubeadm init --kubernetes-version v1.15.0 --pod-network-cidr 10.244.0.0/16 --ignore-preflight-errors=NumCPU
After it runs, pay attention to the output below; it tells us the follow-up steps for building out the cluster:
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.91.132:6443 --token zf26v4.3u5z3g09ekm4owt3 \
    --discovery-token-ca-cert-hash sha256:fce98cb6779dbcc73408d1faad50c9d8f86f154ed88a5380c08cece5e08aba58
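The --pod-network-cidr of 10.244.0.0/16 matches flannel's default, and kube-flannel pods show up in the pod list later, so a flannel manifest was presumably applied at this point; a typical command from that era (the exact URL the author used is not shown in the post) is:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml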
Then just run the corresponding join command on each of your worker nodes:
kubeadm join 192.168.91.132:6443 --token zf26v4.3u5z3g09ekm4owt3 \
    --discovery-token-ca-cert-hash sha256:fce98cb6779dbcc73408d1faad50c9d8f86f154ed88a5380c08cece5e08aba58
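The bootstrap token in this command is only valid for 24 hours by default; if it has expired by the time you add a node, a fresh join command can be generated on the master (standard kubeadm behaviour, not shown in the original post):

kubeadm token create --print-join-command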
After that, run kubectl get nodes on the master node and you should see:
NAME         STATUS   ROLES    AGE    VERSION
k8s-master   Ready    master   21h    v1.15.0
k8s-node1    Ready    <none>   127m   v1.15.0
Next, deploy the Kubernetes dashboard and start kubectl proxy:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
kubectl proxy
The dashboard is then available at:

http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

Done.
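Note that kubectl proxy binds to 127.0.0.1 by default, so the URL above only works from the master itself. If you want to reach the dashboard from the VM host instead, one option (my addition, not something the original post does) is to bind the proxy to all interfaces:

kubectl proxy --address=0.0.0.0 --accept-hosts='.*'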
Troubleshooting: on k8s-node1 the kubelet failed to start, so I used journalctl -f to look at the logs:
-- The start-up result is done.
Jul 19 10:27:34 k8s-node1 kubelet[9831]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 19 10:27:34 k8s-node1 kubelet[9831]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 19 10:27:34 k8s-node1 kubelet[9831]: I0130 10:27:34.877299    9831 server.go:407] Version: v1.15.0
Jul 19 10:27:34 k8s-node1 kubelet[9831]: I0130 10:27:34.877538    9831 plugins.go:103] No cloud provider specified.
Jul 19 10:27:34 k8s-node1 kubelet[9831]: I0130 10:27:34.892361    9831 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 19 10:27:34 k8s-node1 kubelet[9831]: I0130 10:27:34.926248    9831 server.go:666] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
Jul 19 10:27:34 k8s-node1 kubelet[9831]: F0130 10:27:34.926665    9831 server.go:261] failed to run Kubelet: Running with swap on is not supported, please disable swap! or set --fail-swap-on flag to false. /proc/swaps contained: [Filename    Type    Size    Used    Priority /swapfile    file    2097148    0    -2]
Jul 19 10:27:34 k8s-node1 systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
Jul 19 10:27:34 k8s-node1 systemd[1]: Unit kubelet.service entered failed state.
Jul 19 10:27:34 k8s-node1 systemd[1]: kubelet.service failed.
At first I thought the problem was the "Flag --cgroup-driver has been deprecated" lines, but it later turned out to be **failed to run Kubelet: Running with swap on is not supported, please disable swap!**, which is self-explanatory. The fix is to turn off swap (and comment it out in /etc/fstab so it stays off after a reboot), then restart kubelet:
swapoff -a
cp -p /etc/fstab /etc/fstab.bak$(date '+%Y%m%d%H%M%S')
sed -i "s/\/dev\/mapper\/centos-swap/\#\/dev\/mapper\/centos-swap/g" /etc/fstab
systemctl daemon-reload
systemctl restart kubelet
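To confirm swap is really off before restarting kubelet, a quick check (my addition) is:

free -h           # the Swap line should show 0B total
cat /proc/swaps   # should list no active swap devices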
That is why I added this swap step to the preparation section at the top.
Another problem: the dashboard image failed to pull. For example, listing all pods:
[root@k8s-master ~]# kubectl get pods -A
NAMESPACE     NAME                                    READY   STATUS         RESTARTS   AGE
kube-system   coredns-5c98db65d4-b2rgr                1/1     Running        0          20h
kube-system   coredns-5c98db65d4-l6x97                1/1     Running        0          20h
kube-system   etcd-k8s-master                         1/1     Running        4          20h
kube-system   kube-apiserver-k8s-master               1/1     Running        15         20h
kube-system   kube-controller-manager-k8s-master      1/1     Running        27         20h
kube-system   kube-flannel-ds-amd64-k5kjg             1/1     Running        2          110m
kube-system   kube-flannel-ds-amd64-z7lcn             1/1     Running        20         88m
kube-system   kube-proxy-992ql                        1/1     Running        4          20h
kube-system   kube-proxy-ss9r6                        1/1     Running        0          27m
kube-system   kube-scheduler-k8s-master               1/1     Running        29         20h
kube-system   kubernetes-dashboard-7d75c474bb-s7fwq   0/1     ErrImagePull   0          102s
The last pod, kubernetes-dashboard-7d75c474bb-s7fwq, is stuck in ErrImagePull. We inspect it with:

kubectl describe pod kubernetes-dashboard-7d75c474bb-s7fwq -n kube-system

(Be sure to add -n to specify the namespace; otherwise kubectl looks in the default namespace and you get Error from server (NotFound): pods "xxxxxxxx" not found.) The Events section at the end looks like this:
Events:
  Type     Reason     Age                 From                Message
  ----     ------     ----                ----                -------
  Normal   Scheduled  119s                default-scheduler   Successfully assigned kube-system/kubernetes-dashboard-7d75c474bb-s7fwq to k8s-node1
  Normal   Pulling    50s (x3 over 118s)  kubelet, k8s-node1  Pulling image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1"
  Warning  Failed     33s (x3 over 103s)  kubelet, k8s-node1  Failed to pull image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1": rpc error: code = Unknown desc = Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  Warning  Failed     33s (x3 over 103s)  kubelet, k8s-node1  Error: ErrImagePull
  Normal   BackOff    6s (x4 over 103s)   kubelet, k8s-node1  Back-off pulling image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1"
  Warning  Failed     6s (x4 over 103s)   kubelet, k8s-node1  Error: ImagePullBackOff
From this log we can see that the deployment asks for kubernetes-dashboard-amd64:v1.10.1, which is newer than the image currently in Docker on the node (we only have v1.10.0), so the node tries to pull it from k8s.gcr.io and fails. The fix is simply to pull the missing image the same way as before.
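A minimal fix, reusing the same mirror-and-retag trick as pull_images.sh and run on k8s-node1 where the pod was scheduled (my sketch, not the author's exact commands), might look like this; once the image exists locally, the kubelet's next retry will succeed:

docker pull mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.1
docker tag mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.1 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
docker rmi mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.1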
Original source: https://www.cnblogs.com/pluto4596/p/11214975.html