Offline installation of Kubernetes 1.10 on CentOS 7.3 with kubeadm (verified)

This article describes a quick offline installation of Kubernetes 1.10 on CentOS 7.3 using kubeadm.
The setup uses a single master and a single node (more nodes can be added). It needs few resources, making it convenient for quick deployment on a laptop or in a learning environment; it is not suitable for production.

Required files (Baidu Pan link)
Link: https://pan.baidu.com/s/1iQJpKZ9PdFjhz9yTgl0Wjg  Password: gwmh

1. Environment preparation


Hostname   IP              Spec
master1    192.168.1.181   1 CPU / 4 GB
node1      192.168.1.182   2 CPU / 6 GB

Prepare at least two VMs: one master, the rest as nodes.
OS: CentOS 7.3 minimal install. Configure each node's IP.

Set the hostname on each node (master1, node1, ...) and set the time zone.

timedatectl set-timezone Asia/Shanghai  # run on all nodes
hostnamectl set-hostname master1        # run on master1
hostnamectl set-hostname node1          # run on node1

Add entries for master1 and node1 to /etc/hosts on all nodes.

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.181 master1
192.168.1.182 node1
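If the provisioning step may run more than once, the hosts entries can be appended idempotently. A minimal sketch; the add_host helper and HOSTS_FILE variable are made up for illustration, and a temp file stands in for /etc/hosts so the snippet is safe to try:

```shell
# Append a hosts entry only if the hostname is not already present.
# HOSTS_FILE defaults to a temp file standing in for /etc/hosts.
HOSTS_FILE="${HOSTS_FILE:-$(mktemp)}"

add_host() {
  # $1 = IP, $2 = hostname
  grep -q "[[:space:]]$2\$" "$HOSTS_FILE" 2>/dev/null || echo "$1 $2" >> "$HOSTS_FILE"
}

add_host 192.168.1.181 master1
add_host 192.168.1.182 node1
add_host 192.168.1.181 master1   # repeat call adds nothing
cat "$HOSTS_FILE"
```

On a real node you would set HOSTS_FILE=/etc/hosts and run it as root.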

Disable SELinux and firewalld on all nodes.

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
setenforce 0
systemctl disable firewalld
systemctl stop firewalld

 

2. Install Docker

Use the file docker-packages.tar; install it on every node.

tar -xvf docker-packages.tar
cd docker-packages
rpm -Uvh *.rpm        # or: yum localinstall *.rpm
docker version        # check the version after installation

Start docker and enable it at boot.

systemctl start docker && systemctl enable docker

Run:

docker info

==Note the Cgroup Driver==
Cgroup Driver: cgroupfs
Docker and kubelet must use the same cgroup driver. If Docker's is not cgroupfs, run:

cat << EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
EOF
systemctl daemon-reload && systemctl restart docker
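After the restart, it is worth confirming that Docker actually reports the expected driver. A sketch of that check; docker_info here is captured sample output standing in for a live docker info call:

```shell
# Extract the "Cgroup Driver" line the way `docker info` prints it.
# docker_info is sample output; on a real node use: docker_info=$(docker info)
docker_info='Server Version: 17.03.2-ce
Cgroup Driver: cgroupfs
Logging Driver: json-file'

driver=$(printf '%s\n' "$docker_info" | awk -F': ' '/^Cgroup Driver/ {print $2}')
if [ "$driver" = "cgroupfs" ]; then
  echo "cgroup driver OK: $driver"
else
  echo "mismatch: docker reports '$driver', kubelet expects cgroupfs" >&2
fi
```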

 

3. Install kubeadm, kubectl, and kubelet

Use the file kube-packages-1.10.1.tar; install on every node.
kubeadm is the cluster deployment tool.
kubectl is the cluster management tool; the cluster is managed through its commands.
kubelet is the agent that runs on every node of the k8s cluster and manages the node's containers through Docker.

tar -xvf kube-packages-1.10.1.tar
cd kube-packages-1.10.1
rpm -Uvh *.rpm        # or: yum localinstall *.rpm

On all kubernetes nodes, set kubelet to use cgroupfs to match dockerd; otherwise kubelet will fail to start.

By default kubelet uses cgroup-driver=systemd; change it to cgroup-driver=cgroupfs:
sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

Reload the unit files and restart the kubelet service:
systemctl daemon-reload && systemctl restart kubelet

Disable swap and set the bridge sysctls, otherwise kubeadm will report errors later.

swapoff -a
vi /etc/fstab   # comment out the swap line
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
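A quick way to verify the fstab edit is to look for swap entries that are still active (not commented out). Sketched against sample fstab content so it runs anywhere; on a real node you would read /etc/fstab instead:

```shell
# List fstab swap entries that are not commented out (there should be none).
# fstab holds sample content standing in for: cat /etc/fstab
fstab='/dev/mapper/centos-root /     xfs  defaults 0 0
#/dev/mapper/centos-swap     swap  swap defaults 0 0'

active_swap=$(printf '%s\n' "$fstab" | grep -v '^[[:space:]]*#' | awk '$3 == "swap"')
if [ -z "$active_swap" ]; then
  echo "no active swap entries"
else
  echo "swap still enabled:"
  printf '%s\n' "$active_swap"
fi
```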

4. Import the images

Use the file k8s-images-1.10.tar.gz; run this on every node.
With only a few nodes we skip setting up an image registry; any application images needed later must likewise be imported on every node.

docker load -i k8s-images-1.10.tar.gz 
The archive contains 11 images:
k8s.gcr.io/etcd-amd64:3.1.12 
k8s.gcr.io/kube-apiserver-amd64:v1.10.1 
k8s.gcr.io/kube-controller-manager-amd64:v1.10.1 
k8s.gcr.io/kube-proxy-amd64:v1.10.1 
k8s.gcr.io/kube-scheduler-amd64:v1.10.1 
k8s.gcr.io/pause-amd64:3.1 
k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8  
k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8 
k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8 
k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3 
quay.io/coreos/flannel:v0.9.1-amd64 
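After docker load you can check that nothing is missing by comparing the required list against what docker images reports. A sketch; the `have` list is sample output standing in for docker images --format '{{.Repository}}:{{.Tag}}', shortened and deliberately missing one image to show the output:

```shell
# Report required images absent from the local image list.
required='k8s.gcr.io/etcd-amd64:3.1.12
k8s.gcr.io/kube-apiserver-amd64:v1.10.1
quay.io/coreos/flannel:v0.9.1-amd64'

# Sample stand-in for: docker images --format '{{.Repository}}:{{.Tag}}'
have='k8s.gcr.io/etcd-amd64:3.1.12
quay.io/coreos/flannel:v0.9.1-amd64'

missing=$(printf '%s\n' "$required" | while read -r img; do
  printf '%s\n' "$have" | grep -qxF "$img" || echo "$img"
done)
echo "missing: $missing"
```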

5. Deploy the master node with kubeadm init

Run only on the master. This is the simplest and quickest deployment option: the etcd, apiserver, controller-manager, and scheduler services all run as containers on the master. etcd is a single instance without certificates; its data is mounted at /var/lib/etcd on the master node.
kubeadm init also supports etcd clusters and certificate mode; see https://www.cnblogs.com/felixzh/p/9726199.html for that configuration, which is omitted here.

Note that the init command must be given the version and the pod network CIDR:

kubeadm init --kubernetes-version=v1.10.1 --pod-network-cidr=10.244.0.0/16

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.1.181:6443 --token wct45y.tq23fogetd7rp3ck --discovery-token-ca-cert-hash sha256:c267e2423dba21fdf6fc9c07e3b3fa17884c4f24f0c03f2283a230c70b07772f

 

Note down the join command; it is needed later when nodes join the cluster.
Run the commands from the output to save the kubeconfig:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
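To avoid losing the join command, one approach is to tee the kubeadm init output into a log and grep it back later. A sketch; init_log stands in for a file saved with, e.g., kubeadm init ... | tee kubeadm-init.log (a made-up file name):

```shell
# Recover the `kubeadm join ...` line from saved init output.
init_log='Your Kubernetes master has initialized successfully!

  kubeadm join 192.168.1.181:6443 --token wct45y.tq23fogetd7rp3ck --discovery-token-ca-cert-hash sha256:c267e2423dba21fdf6fc9c07e3b3fa17884c4f24f0c03f2283a230c70b07772f'

join_cmd=$(printf '%s\n' "$init_log" | grep 'kubeadm join' | sed 's/^[[:space:]]*//')
echo "$join_cmd"
```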

At this point kubectl get node already shows the master node; NotReady is expected because the network plugin has not been deployed yet.

[root@master1 kubernetes1.10]# kubectl get node
NAME      STATUS     ROLES     AGE       VERSION
master1   NotReady   master    3m        v1.10.1

View all pods with kubectl get pod --all-namespaces.
kube-dns also depends on the container network, so its Pending status is normal at this point.

[root@master1 kubernetes1.10]# kubectl get pod --all-namespaces
NAMESPACE     NAME                              READY     STATUS    RESTARTS   AGE
kube-system   etcd-master1                      1/1       Running   0          3m
kube-system   kube-apiserver-master1            1/1       Running   0          3m
kube-system   kube-controller-manager-master1   1/1       Running   0          3m
kube-system   kube-dns-86f4d74b45-5nrb5         0/3       Pending   0          4m
kube-system   kube-proxy-ktxmb                  1/1       Running   0          4m
kube-system   kube-scheduler-master1            1/1       Running   0          3m

Configure the KUBECONFIG variable:

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
source /etc/profile
echo $KUBECONFIG    # should print /etc/kubernetes/admin.conf

6. Deploy the flannel network

k8s supports several network solutions: flannel, calico, Open vSwitch.
We use flannel here. Once familiar with k8s deployment, you can try the other options; my earlier 1.9 deployment article covers both flannel and calico, as well as the steps needed to switch between them.

kubectl apply -f kube-flannel.yml

Once the network is ready, the node status changes to Ready.
[root@master1 kubernetes1.10]# kubectl get node
NAME      STATUS    ROLES     AGE       VERSION
master1   Ready     master    18m       v1.10.1

7. Join nodes with kubeadm join

7.1 Join a node to the cluster

Use the join command generated earlier by kubeadm init. After the join succeeds, go back to the master and check:

kubeadm join 192.168.1.181:6443 --token wct45y.tq23fogetd7rp3ck --discovery-token-ca-cert-hash sha256:c267e2423dba21fdf6fc9c07e3b3fa17884c4f24f0c03f2283a230c70b07772f

[root@master1 kubernetes1.10]# kubectl get node
NAME      STATUS    ROLES     AGE       VERSION
master1   Ready     master    31m       v1.10.1
node1     Ready     <none>    44s       v1.10.1

The cluster deployment is now complete.

7.2 If you hit the x509 error

This step is needed only if the error occurs; otherwise skip it.
It happens because the KUBECONFIG variable is missing on the master node.

[discovery] Failed to request cluster info, will try again: [Get https://192.168.1.181:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: x509: certificate has expired or is not yet valid]

Run on the master:

export KUBECONFIG=$HOME/.kube/config

On the node, run kubeadm reset and then join again:

kubeadm reset
kubeadm join  xxx ...

7.3 Joining a node when the join command was lost

If the node already joined successfully, skip this step.
Scenario: the join command produced by kubeadm init above was not saved. A node can still be joined as follows.
First get a token on the master. If kubeadm token list prints nothing, create one with kubeadm token create, and note down the token.

[root@master1 kubernetes1.10]# kubeadm token list
TOKEN                     TTL       EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
wct45y.tq23fogetd7rp3ck   22h       2018-04-26T21:38:57+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
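Instead of copying the token by eye, it can be extracted with awk, since the first data row is line 2 of the output. Here token_list is the sample output above, standing in for a live kubeadm token list call:

```shell
# Pull the token out of `kubeadm token list` output (header is line 1).
token_list='TOKEN                     TTL       EXPIRES                     USAGES                   DESCRIPTION
wct45y.tq23fogetd7rp3ck   22h       2018-04-26T21:38:57+08:00   authentication,signing   The default bootstrap token'

token=$(printf '%s\n' "$token_list" | awk 'NR == 2 {print $1}')
echo "token: $token"
```

On the master the first line would simply be: token=$(kubeadm token list | awk 'NR == 2 {print $1}').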

Run the following on the node, substituting your token:

kubeadm join --token wct45y.tq23fogetd7rp3ck 192.168.1.181:6443 --discovery-token-unsafe-skip-ca-verification

8. Deploy the k8s UI (dashboard)

The dashboard is the official k8s management UI; it shows application information and can deploy applications. Its display language follows the browser's.
The official dashboard defaults to HTTPS, which Chrome refuses to open. This deployment was modified to use HTTP for convenience; Chrome then opens it normally.
Three yaml files need to be applied:

kubectl apply -f kubernetes-dashboard-http.yaml
kubectl apply -f admin-role.yaml
kubectl apply -f kubernetes-dashboard-admin.rbac.yaml

[root@master1 kubernetes1.10]# kubectl apply -f kubernetes-dashboard-http.yaml 
serviceaccount "kubernetes-dashboard" created
role.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" created
rolebinding.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" created
deployment.apps "kubernetes-dashboard" created
service "kubernetes-dashboard" created
[root@master1 kubernetes1.10]# kubectl apply -f admin-role.yaml 
clusterrolebinding.rbac.authorization.k8s.io "kubernetes-dashboard" created
[root@master1 kubernetes1.10]# kubectl apply -f kubernetes-dashboard-admin.rbac.yaml 
clusterrolebinding.rbac.authorization.k8s.io "dashboard-admin" created

Once created, the UI is reachable at http://<any-node-IP>:31000.

 

9. FAQ

9.1 The master node does not schedule pods by default

Run the following. The node-role.kubernetes.io/master key can be found under the taints field in kubectl edit node master1.

root@master1:/var/lib/kubelet# kubectl taint node master1 node-role.kubernetes.io/master-
node "master1" untainted

9.2 Pods fail to start on a node / resetting a node's network after removal

node1 had been added repeatedly before; the network must be cleaned up before adding it again.

root@master1:/var/lib/kubelet# kubectl get po -o wide
NAME                   READY     STATUS              RESTARTS   AGE       IP           NODE
nginx-8586cf59-6zw9k   1/1       Running             0          9m        10.244.3.3   node2
nginx-8586cf59-jk5pc   0/1       ContainerCreating   0          9m        <none>       node1
nginx-8586cf59-vm9h4   0/1       ContainerCreating   0          9m        <none>       node1
nginx-8586cf59-zjb84   1/1       Running             0          9m        10.244.3.2   node2
root@node1:~# journalctl -u kubelet
 failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to set up pod "nginx-8586cf59-rm4sh_default" network: failed to set bridge addr: "cni0" already has an IP address different from 10.244.2.1/24
12252 cni.go:227] Error while adding to cni network: failed to set bridge addr: "cni0" already

Reset the kubernetes services and the network: delete the network configuration and the network interfaces.

kubeadm reset
systemctl stop kubelet
systemctl stop docker
rm -rf /var/lib/cni/
rm -rf /var/lib/kubelet/*
rm -rf /etc/cni/
ifconfig cni0 down
ifconfig flannel.1 down
ifconfig docker0 down
ip link delete cni0
ip link delete flannel.1
systemctl start docker

Rejoin the node:

systemctl start docker

kubeadm join --token 55c2c6.2a4bde1bc73a6562 192.168.1.144:6443 --discovery-token-ca-cert-hash sha256:0fdf8cfc6fecc18fded38649a4d9a81d043bf0e4bf57341239250dcc62d2c832

Removing a node from the cluster

To remove the node node2 from the cluster, run the following.

On the master node:

kubectl drain node2 --delete-local-data --force --ignore-daemonsets

On node2:

kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/

Back on the master node:

kubectl delete node node2

References

https://www.jianshu.com/p/9c7e1c957752
https://kubernetes.io/docs/setup/independent/install-kubeadm/

https://www.kubernetes.org.cn/4619.html

Installing kubeadm

Using kubeadm to Create a Cluster

Get Docker CE for CentOS
