A Complete Record of Building Kubernetes v1.11.2 with kubeadm
Required images
k8s.gcr.io/pause:3.1
k8s.gcr.io/coredns
quay.io/coreos/flannel:v0.10.0-amd64
k8s.gcr.io/etcd-amd64:3.2.18
k8s.gcr.io/kube-proxy-amd64:v1.11.2
k8s.gcr.io/kube-apiserver-amd64:v1.11.2
k8s.gcr.io/kube-controller-manager-amd64:v1.11.2
k8s.gcr.io/kube-scheduler-amd64:v1.11.2
Monitoring
gcr.io/google_containers/heapster-grafana-amd64:v4.4.3
gcr.io/google_containers/heapster-influxdb-amd64:v1.3.3
gcr.io/google_containers/heapster-amd64:v1.5.3
quay.io/calico/cni:v2.0.5
quay.io/calico/kube-controllers:v2.0.4
The main purpose of this article is to record the steps I took to set up a K8S cluster, along with the problems I hit and their solutions, from the cluster build through installing Kubernetes-Dashboard and configuring role permissions.
A quick overview of the environment; the four nodes are as follows:
Node name  IP           OS       Installed software
Master 10.211.55.6 Centos7 kubeadm,kubelet,kubectl,docker
Node1 10.211.55.7 Centos7 kubeadm,kubelet,kubectl,docker
Node2 10.211.55.8 Centos7 kubeadm,kubelet,kubectl,docker
Node3 10.211.55.9 Centos7 kubeadm,kubelet,kubectl,docker
kubeadm, kubectl, and kubelet are version v1.10.0; docker is version 1.13.1.
I. Preparation on each node:
1. Stop and disable the firewall
systemctl stop firewalld.service
systemctl disable firewalld.service
2. Permanently disable SELinux
vim /etc/selinux/config
SELINUX=disabled
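The manual vim edit above can also be done non-interactively. A minimal sketch, demonstrated on a scratch copy so it runs anywhere; on a real node, point CONFIG at /etc/selinux/config and additionally run `setenforce 0` so the change applies before the next reboot.

```shell
# Sketch: non-interactive equivalent of editing /etc/selinux/config with vim.
# Demonstrated on a scratch copy; on a real node use CONFIG=/etc/selinux/config
# and also run `setenforce 0` to apply the change immediately.
CONFIG=$(mktemp)
echo "SELINUX=enforcing" > "$CONFIG"
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' "$CONFIG"
cat "$CONFIG"    # SELINUX=disabled
```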
3. Synchronize system time across the cluster
yum -y install ntp
ntpdate 0.asia.pool.ntp.org
4. Optimization
Enable ipv4 forwarding:
vim /etc/sysctl.d/k8s.conf and add the line: net.ipv4.ip_forward = 1
sysctl -p /etc/sysctl.d/k8s.conf
net-bridge settings on CentOS 7:
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
5. Reboot the machine
reboot
II. Software installation and configuration:
Note ⚠️: configure the repositories as needed. Three repos are given below. The kubernetes yum repo must be configured; the docker repo is only needed if you want the docker-ce version, otherwise the highest available docker version is 1.13.1.
# Aliyun yum repo:
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
yum clean all
yum makecache
# docker yum repo
cat >> /etc/yum.repos.d/docker.repo <<EOF
[docker-repo]
name=Docker Repository
baseurl=http://mirrors.aliyun.com/docker-engine/yum/repo/main/centos/7
enabled=1
gpgcheck=0
EOF
# kubernetes yum repo
cat >> /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF
With the repos configured, install the software:
yum -y install docker kubeadm kubelet kubectl ebtables
Disable swap:
swapoff -a
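Note that `swapoff -a` only lasts until the next reboot; to make it permanent, also comment out the swap entry in /etc/fstab. A sketch of the sed involved, demonstrated on a scratch file (on a real node, run only the sed against /etc/fstab itself):

```shell
# Sketch: comment out the swap line so swap stays off after a reboot.
# On a real node: sed -i '/ swap / s/^/#/' /etc/fstab
# Demonstrated here on a scratch file with sample fstab content.
FSTAB=$(mktemp)
printf '%s\n' '/dev/sda1 / xfs defaults 0 0' '/dev/sda2 swap swap defaults 0 0' > "$FSTAB"
sed -i '/ swap / s/^/#/' "$FSTAB"
cat "$FSTAB"    # the swap line is now prefixed with '#'
```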
Start docker and enable it at boot:
systemctl start docker
systemctl enable docker
Parameter configuration:
kubelet's cgroup driver must match the one docker uses, so first check docker's cgroup driver:
docker info |grep cgroup
Make kubelet's --cgroup-driver match whatever docker reports. For example, if docker v1.13.1 reports cgroupfs, change kubelet's default of systemd to match:
sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Reload the configuration and start kubelet:
systemctl daemon-reload
systemctl start kubelet
Note ⚠️: always start kubelet before running kubeadm, otherwise kubeadm will report that it cannot connect.
From here on, the Master node and the Node nodes are handled separately:
Start the Master node:
kubeadm init --kubernetes-version=1.10.0 --token-ttl 0 --pod-network-cidr=10.244.0.0/16
If the command above fails, run the following instead:
kubeadm init --kubernetes-version=1.9.0 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.20.31 --node-name=<node hostname>
This command sets the Kubernetes cluster version to v1.10.0; a token TTL of 0 means the token never expires; the container network segment is 10.244.0.0/16. Because kubeadm only builds a minimal viable cluster, many addons are not included, among them the network plugin, which must be installed afterwards; its network segment, however, must be configured up front.
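The same flags can also be kept in a kubeadm configuration file passed via `kubeadm init --config`. A sketch only; the v1alpha1 MasterConfiguration API shown is the one kubeadm shipped around v1.10, and the exact field names should be verified against your kubeadm version:

```yaml
# Sketch: file-based equivalent of the kubeadm init flags above
# (v1alpha1 API assumed for the kubeadm v1.10 era -- verify for your version).
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
kubernetesVersion: v1.10.0
tokenTTL: "0s"               # same as --token-ttl 0: token never expires
networking:
  podSubnet: 10.244.0.0/16   # same as --pod-network-cidr
```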
Note ⚠️: the biggest difference between current Kubernetes versions and older ones is that the core components are now containerized, so the installation pulls images automatically. Since most of these images live on Google's servers, users behind the firewall cannot download them, and the install hangs at [init] This often takes around a minute; or longer if the control plane images have to be pulled. Two ways around this:
1. Use a proxy server outside the firewall and configure docker to use it: edit /etc/sysconfig/docker and add:
HTTP_PROXY=http://proxy_ip:port
http_proxy=$HTTP_PROXY
Restart docker: systemctl restart docker
2. Download all required images in advance. Below are all the images needed for a basic v1.10.0 install (other versions may require different image versions; consult the official documentation):
Images required on the Master node:
k8s.gcr.io/kube-apiserver-amd64:v1.10.0
k8s.gcr.io/kube-scheduler-amd64:v1.10.0
k8s.gcr.io/kube-controller-manager-amd64:v1.10.0
k8s.gcr.io/kube-proxy-amd64:v1.10.0
k8s.gcr.io/etcd-amd64:3.1.12
k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8
k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8
k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8
k8s.gcr.io/pause-amd64:3.1
quay.io/coreos/flannel:v0.9.1-amd64 (the network plugin image; flannel is used here)
Images required on Node nodes:
k8s.gcr.io/kube-proxy-amd64:v1.10.0
k8s.gcr.io/pause-amd64:3.1
quay.io/coreos/flannel:v0.9.1-amd64 (the network plugin image; flannel is used here)
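Pulling in advance usually means pulling from a mirror registry and retagging to the k8s.gcr.io names kubeadm expects. A sketch; the Aliyun mirror path below is an assumption, so substitute any registry that mirrors these images. With DRY_RUN=1 (the default) the commands are only printed, not executed:

```shell
# Sketch: pull the v1.10.0 images from a mirror and retag them as k8s.gcr.io
# names. MIRROR is an assumption -- use any registry mirroring the gcr.io
# images. DRY_RUN=1 (default) prints the commands instead of running docker.
MIRROR=${MIRROR:-registry.cn-hangzhou.aliyuncs.com/google_containers}
DRY_RUN=${DRY_RUN:-1}
IMAGES="kube-apiserver-amd64:v1.10.0 kube-scheduler-amd64:v1.10.0 \
kube-controller-manager-amd64:v1.10.0 kube-proxy-amd64:v1.10.0 \
etcd-amd64:3.1.12 pause-amd64:3.1"
for img in $IMAGES; do
  if [ -n "$DRY_RUN" ]; then
    echo "docker pull $MIRROR/$img && docker tag $MIRROR/$img k8s.gcr.io/$img"
  else
    docker pull "$MIRROR/$img" && docker tag "$MIRROR/$img" "k8s.gcr.io/$img"
  fi
done
```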
A successful Master install ends with output like the following:
[init] Using Kubernetes version: v1.10.0
...
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 39.511972 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node master as master by adding a label and a taint
[markmaster] Master master tainted and labelled with key/value:node-role.kubernetes.io/master=""
[bootstraptoken] Using token: <token>
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run (as a regular user):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
http://kubernetes.io/docs/admin/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join 10.211.55.6:6443 --token 63nuhu.quu72c0hl95hc82m --discovery-token-ca-cert-hash sha256:3971ae49e7e5884bf191851096e39d8e28c0b77718bb2a413638057da66ed30a
The line
kubeadm join 10.211.55.6:6443 --token 63nuhu.quu72c0hl95hc82m --discovery-token-ca-cert-hash sha256:3971ae49e7e5884bf191851096e39d8e28c0b77718bb2a413638057da66ed30a
is the command later nodes run to join the cluster. Because --token-ttl 0 was set, it remains valid forever, so save it. kubeadm token list prints the tokens but not the full command, because the CA certificate hash must be converted separately.
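The hash conversion mentioned above is the openssl pipeline from the kubeadm documentation. A sketch: on the master node the certificate is /etc/kubernetes/pki/ca.crt; here a throwaway self-signed certificate is generated so the pipeline can be demonstrated anywhere. Later kubeadm releases can also print the whole command with `kubeadm token create --print-join-command`.

```shell
# Sketch: compute the --discovery-token-ca-cert-hash value from the cluster CA.
# On a master node set CA_CERT=/etc/kubernetes/pki/ca.crt; the throwaway
# self-signed cert below only exists so the pipeline runs outside a cluster.
CA_CERT=${CA_CERT:-/tmp/demo-ca.crt}
[ -f "$CA_CERT" ] || openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -keyout /dev/null -out "$CA_CERT" -subj "/CN=kubernetes" 2>/dev/null
openssl x509 -pubkey -in "$CA_CERT" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //; s/^/sha256:/'
```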
Note ⚠️: after the cluster starts you must obtain access credentials, otherwise kubectl get nodes on the master node returns localhost:8080 connection refused. To obtain them:
Root user: export KUBECONFIG=/etc/kubernetes/admin.conf
Non-root user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
III. Installing the network plugin:
After the Master node is up, and before adding any Node nodes, the network plugin must be installed. Kubernetes offers many choices, such as Calico, Canal, flannel, Kube-router, Romana, and Weave Net; installation instructions for each can be found in the official documentation.
This article uses flannel as the network plugin:
vim /etc/sysctl.conf and add the following:
net.ipv4.ip_forward=1
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
Apply the change immediately:
sysctl -p
Run the install:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
When it finishes, run:
kubectl get pods --all-namespaces
to watch the Pods start up. Once the kube-dns Pod is Running, the cluster is ready to add nodes.
IV. Adding Node nodes:
To join a Node to the cluster, simply start kubelet and then run the previously saved join command:
systemctl start kubelet
kubeadm join 10.211.55.6:6443 --token 63nuhu.quu72c0hl95hc82m --discovery-token-ca-cert-hash sha256:3971ae49e7e5884bf191851096e39d8e28c0b77718bb2a413638057da66ed30a
The node has joined the cluster.
Then run:
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
Run kubectl get nodes on the master node to verify the cluster state; it shows:
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 7h v1.10.0
node1 Ready <none> 6h v1.10.0
node2 Ready <none> 2h v1.10.0
node3 Ready <none> 4h v1.10.0
The Kubernetes v1.10.0 cluster build is complete!
V. Installing Kubernetes-Dashboard (WebUI):
Like the network plugin, the dashboard is a containerized application; install it by applying its yaml:
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
For reference, see the official documentation.
After installation, run:
kubectl get pods --all-namespaces
to check Pod startup status. Once kubernetes-dashboard has started, run:
kubectl proxy --address=10.211.55.6 --accept-hosts='^*$'
The key parameter is address, the master node's IP; if --accept-hosts is omitted, the web page will return:
<h3>unauthorized</h3>
On startup the console prints:
Starting to serve on 10.211.55.6:8001
Open the WebUI:
http://10.211.55.6:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/overview?namespace=default
The following login page appears:
Logging in requires an account bound to a ClusterRole with the necessary cluster permissions. If you skip login, the default kubernetes-dashboard service account (generated when the container was created) is used; initially it has no permissions at all, so a role binding must be configured in the system:
Create a yaml file anywhere on the master node; the name is arbitrary:
vim ClusterRoleBinding.yaml
Edit the file:
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: kubernetes-dashboard
subjects:
- kind: ServiceAccount       # the dashboard's default service account
  name: kubernetes-dashboard
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin        # full permissions, for testing only (see the note below)
  apiGroup: rbac.authorization.k8s.io
Save, exit, and apply the file:
kubectl create -f ClusterRoleBinding.yaml
Reopen the WebUI; the cluster information is now displayed.
Note ⚠️: granting the kubernetes-dashboard account cluster-admin permissions is for testing only; it is not a safe practice. It is better to create a new account with limited cluster permissions, as follows:
Create a new yaml file and write:
apiVersion: v1
kind: ServiceAccount              # the account the binding below refers to (implied by the steps that follow)
metadata:
  name: dashboard
  namespace: kube-system
---
kind: ClusterRole                 # create a cluster role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: dashboard                 # role name
rules:                            # the rules were elided in the original; an example granting read-only access:
- apiGroups: [""]
  resources: ["pods", "services", "nodes"]
  verbs: ["get", "list", "watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: dashboard-extended
subjects:
- kind: ServiceAccount
  name: dashboard
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: dashboard                 # write cluster-admin here to grant full permissions
  apiGroup: rbac.authorization.k8s.io
Apply the file, then check that the account was created:
kubectl get serviceaccount --all-namespaces
Find the account's secret name:
kubectl get secret -n kube-system
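Picking the right secret out of that listing can be scripted instead of done by eye. A sketch: SAMPLE below is canned output standing in for a live cluster; on a real master, pipe the kubectl command itself into the awk filter.

```shell
# Sketch: extract the dashboard token secret name from `kubectl get secret`
# output. SAMPLE is canned output; on a real master replace the echo with:
#   kubectl get secret -n kube-system
SAMPLE='NAME                     TYPE                                  DATA      AGE
dashboard-token-wd9rz    kubernetes.io/service-account-token   3         1h
default-token-abc12      kubernetes.io/service-account-token   3         7h'
SECRET=$(echo "$SAMPLE" | awk '/^dashboard-token-/ {print $1}')
echo "$SECRET"    # dashboard-token-wd9rz
# then: kubectl describe secret "$SECRET" -n kube-system
```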
Use the secret name to find the token:
kubectl describe secret dashboard-token-wd9rz -n kube-system
This outputs the token details:
Use this token to log in to the WebUI.
This concludes the full record of installing K8S v1.10.0 with kubeadm. The article is meant as a summary and reference, based on the official documentation; corrections for any errors or omissions are welcome.