1. There are three ways to set up Kubernetes:
1) Minikube (intended for test environments only; do not use it in production)
2) kubeadm (a quick way to deploy Kubernetes; relatively simple, and usable in production)
3) Binary installation (a complex process with plenty of pitfalls)
2. Installing Kubernetes with kubeadm:
1) Environment
IP address      | Hostname   |
192.168.1.100   | k8s-master |
192.168.1.101   | k8s-node1  |
VM configuration: OS: CentOS 7.5; preferably 2 or more CPU cores and at least 2 GB of RAM.
2) Prerequisites
2.1 Disable the firewall:
systemctl disable firewalld    # do not start the firewall at boot
systemctl stop firewalld       # stop the firewall
2.2 Disable SELinux
setenforce 0    # disable SELinux temporarily
2.3 Disable swap
swapoff -a
2.4 Create the kernel parameter file for Kubernetes
vi /etc/sysctl.d/k8s.conf

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

# apply the settings
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
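Before applying the file for real, its contents can be sanity-checked with plain text tools. A minimal sketch, using a mktemp scratch file instead of /etc/sysctl.d so it runs anywhere:

```shell
# Write the same three settings to a scratch file (not /etc/sysctl.d)
conf="$(mktemp)"
cat > "$conf" <<'EOF'
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# count how many settings are enabled (value = 1)
ok_count=$(grep -c ' = 1$' "$conf")
echo "$ok_count of 3 settings enabled"
```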
3) Install docker-ce
yum install -y yum-utils device-mapper-persistent-data lvm2    # install Docker's dependencies
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo    # add the Docker yum repo
yum list docker-ce.x86_64 --showduplicates | sort -r    # list the available docker-ce versions

 * updates: mirrors.huaweicloud.com
Loading mirror speeds from cached hostfile
Loaded plugins: fastestmirror
Installed Packages
 * extras: mirrors.huaweicloud.com
docker-ce.x86_64    3:18.09.3-3.el7            docker-ce-stable
docker-ce.x86_64    3:18.09.2-3.el7            docker-ce-stable
docker-ce.x86_64    3:18.09.1-3.el7            docker-ce-stable
docker-ce.x86_64    3:18.09.0-3.el7            docker-ce-stable
docker-ce.x86_64    18.06.3.ce-3.el7           docker-ce-stable
docker-ce.x86_64    18.06.2.ce-3.el7           docker-ce-stable
docker-ce.x86_64    18.06.1.ce-3.el7           docker-ce-stable
docker-ce.x86_64    18.06.1.ce-3.el7           @docker-ce-stable
docker-ce.x86_64    18.06.0.ce-3.el7           docker-ce-stable
docker-ce.x86_64    18.03.1.ce-1.el7.centos    docker-ce-stable
docker-ce.x86_64    18.03.0.ce-1.el7.centos    docker-ce-stable
docker-ce.x86_64    17.12.1.ce-1.el7.centos    docker-ce-stable
docker-ce.x86_64    17.12.0.ce-1.el7.centos    docker-ce-stable
docker-ce.x86_64    17.09.1.ce-1.el7.centos    docker-ce-stable
docker-ce.x86_64    17.09.0.ce-1.el7.centos    docker-ce-stable
docker-ce.x86_64    17.06.2.ce-1.el7.centos    docker-ce-stable
docker-ce.x86_64    17.06.1.ce-1.el7.centos    docker-ce-stable
docker-ce.x86_64    17.06.0.ce-1.el7.centos    docker-ce-stable
docker-ce.x86_64    17.03.3.ce-1.el7           docker-ce-stable
docker-ce.x86_64    17.03.2.ce-1.el7.centos    docker-ce-stable
docker-ce.x86_64    17.03.1.ce-1.el7.centos    docker-ce-stable
docker-ce.x86_64    17.03.0.ce-1.el7.centos    docker-ce-stable
 * base: mirrors.huaweicloud.com
Available Packages

yum makecache fast    # rebuild the yum cache
yum install -y --setopt=obsoletes=0 docker-ce-18.06.1.ce-3.el7    # install Docker
systemctl start docker     # start Docker
systemctl enable docker    # start Docker at boot
4) Create the Kubernetes repo and install the Kubernetes components
# create the Kubernetes yum repo
cat > /etc/yum.repos.d/k8s.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF
# rebuild the yum cache
yum makecache fast
# install the Kubernetes components
yum install -y kubelet kubeadm kubectl
# edit the kubelet configuration
vi /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS=--fail-swap-on=false    # relax the rule that swap must be off when the kubelet starts

# start the kubelet at boot
systemctl enable kubelet.service
至此以上操做在k8s-master和k8s-node1兩臺機器上執行
至此如下均在k8s-master上執行
4.1 Adjust the kubeadm configuration
# dump the default configuration into kubeadm.conf
kubeadm config print-default > kubeadm.conf

# by default kubeadm pulls its images from Google's registry, which is not
# reachable from mainland China without a proxy; if you have no proxy, switch
# to the Aliyun mirror with the command below
sed -i "s/imageRepository: .*/imageRepository: registry.aliyuncs.com\/google_containers/g" kubeadm.conf

# pin the Kubernetes version
sed -i "s/kubernetesVersion: .*/kubernetesVersion: v1.13.4/g" kubeadm.conf

# pre-pull the component images needed for initialization
kubeadm config images pull --config kubeadm.conf

# set the master node IP
sed -i "s/advertiseAddress: .*/advertiseAddress: 192.168.1.100/g" kubeadm.conf
# set the pod network CIDR
sed -i "s/podSubnet: .*/podSubnet: \"10.244.0.0\/16\"/g" kubeadm.conf
# initialize the cluster
kubeadm init --config kubeadm.conf

# if initialization aborts with a warning that swap must be off, rerun it
# with the warning explicitly ignored:
kubeadm init --config kubeadm.conf --ignore-preflight-errors=Swap
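The sed edits can be rehearsed on a stand-in file before touching the real kubeadm.conf. A sketch: the field names match the real file, but the stand-in values here are illustrative defaults, not output from a live cluster:

```shell
# A minimal stand-in for kubeadm.conf containing the four fields the
# sed commands touch (values are placeholders)
kconf="$(mktemp)"
cat > "$kconf" <<'EOF'
advertiseAddress: 1.2.3.4
imageRepository: k8s.gcr.io
kubernetesVersion: v1.13.0
podSubnet: ""
EOF
# the same four substitutions used above
sed -i "s/imageRepository: .*/imageRepository: registry.aliyuncs.com\/google_containers/g" "$kconf"
sed -i "s/kubernetesVersion: .*/kubernetesVersion: v1.13.4/g" "$kconf"
sed -i "s/advertiseAddress: .*/advertiseAddress: 192.168.1.100/g" "$kconf"
sed -i "s/podSubnet: .*/podSubnet: \"10.244.0.0\/16\"/g" "$kconf"
cat "$kconf"
```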
# The output after a successful initialization:
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join 192.168.1.100:6443 --token lb7b9b.mnb0oe0su1rtemnm --discovery-token-ca-cert-hash sha256:2ec77ce65a291770f6fcf42b60fc5b2200a8a381d46ce2b1bf7ec73310a95727
Note: the kubeadm join ... line above is important; it is the command every additional node must run to join this cluster.
If you lose it, regenerate it with: kubeadm token create --print-join-command
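If the join line was saved somewhere, the token and CA-cert hash can also be pulled back out of it with standard text tools. A sketch, using the example join line from above:

```shell
# The saved join command (the example printed by kubeadm init above)
join='kubeadm join 192.168.1.100:6443 --token lb7b9b.mnb0oe0su1rtemnm --discovery-token-ca-cert-hash sha256:2ec77ce65a291770f6fcf42b60fc5b2200a8a381d46ce2b1bf7ec73310a95727'
# extract the value following each flag
token=$(printf '%s\n' "$join" | sed 's/.*--token \([^ ]*\).*/\1/')
cahash=$(printf '%s\n' "$join" | sed 's/.*--discovery-token-ca-cert-hash \([^ ]*\).*/\1/')
echo "token:  $token"
echo "cahash: $cahash"
```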
After initialization succeeds, don't forget to run the commands from the output:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
If the VM becomes sluggish after initialization, increase its memory allocation.
4.2 Check cluster health
kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
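This check is easy to script. A sketch: the kubectl output is canned here (the same sample as above) so the parsing can be shown without a live cluster:

```shell
# Canned "kubectl get cs" output; on a real cluster use:
#   cs_output=$(kubectl get cs)
cs_output='NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}'
# collect any row whose STATUS column is not "Healthy"
unhealthy=$(printf '%s\n' "$cs_output" | awk 'NR > 1 && $2 != "Healthy" {print $1}')
if [ -z "$unhealthy" ]; then
    echo "all components healthy"
else
    echo "unhealthy: $unhealthy"
fi
```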
5) Deploy the flannel network
mkdir -p ~/k8s/
cd ~/k8s

# fetch the flannel manifest
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# apply the manifest
kubectl apply -f kube-flannel.yml

# check the flannel DaemonSets
kubectl get ds -l app=flannel -n kube-system
NAME                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                     AGE
kube-flannel-ds-amd64     2         2         2       1            2           beta.kubernetes.io/arch=amd64     22h
kube-flannel-ds-arm       0         0         0       0            0           beta.kubernetes.io/arch=arm       22h
kube-flannel-ds-arm64     0         0         0       0            0           beta.kubernetes.io/arch=arm64     22h
kube-flannel-ds-ppc64le   0         0         0       0            0           beta.kubernetes.io/arch=ppc64le   22h
kube-flannel-ds-s390x     0         0         0       0            0           beta.kubernetes.io/arch=s390x     22h

# check node status
kubectl get nodes
NAME         STATUS     ROLES    AGE   VERSION
k8s-master   NotReady   master   23h   v1.13.4
# The master is NotReady because kubeadm puts an extra taint,
# node.kubernetes.io/not-ready:NoSchedule, on nodes that are not yet ready.
# The logic is easy to follow: a node that is not Ready should not accept
# scheduling, yet a node cannot become Ready until the network plugin has
# been deployed. So edit kube-flannel.yml and add a toleration for this taint:
vi ~/k8s/kube-flannel.yml
# modified section:
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
- key: node.kubernetes.io/not-ready
operator: Exists
effect: NoSchedule
Run kubectl apply -f kube-flannel.yml again; this time the flannel deployment completes and the nodes move to the Ready state.
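A quick grep can confirm the toleration actually made it into the saved manifest before re-applying. A sketch, where a heredoc scratch file stands in for the edited kube-flannel.yml:

```shell
# Scratch copy of the edited tolerations section
yml="$(mktemp)"
cat > "$yml" <<'EOF'
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      - key: node.kubernetes.io/not-ready
        operator: Exists
        effect: NoSchedule
EOF
# on a real system: grep -c '\- key:' ~/k8s/kube-flannel.yml
toleration_count=$(grep -c '\- key:' "$yml")
if grep -q 'node.kubernetes.io/not-ready' "$yml"; then
    echo "not-ready toleration present ($toleration_count tolerations total)"
fi
```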
6) Run kubectl get pod --all-namespaces -o wide and make sure every pod is in the Running state.
kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                            READY   STATUS    RESTARTS   AGE   IP              NODE    NOMINATED NODE
kube-system   coredns-576cbf47c7-njt7l        1/1     Running   0          12m   10.244.0.3      node1   <none>
kube-system   coredns-576cbf47c7-vg2gd        1/1     Running   0          12m   10.244.0.2      node1   <none>
kube-system   etcd-node1                      1/1     Running   0          12m   192.168.61.11   node1   <none>
kube-system   kube-apiserver-node1            1/1     Running   0          12m   192.168.61.11   node1   <none>
kube-system   kube-controller-manager-node1   1/1     Running   0          12m   192.168.61.11   node1   <none>
kube-system   kube-flannel-ds-amd64-bxtqh     1/1     Running   0          2m    192.168.61.11   node1   <none>
kube-system   kube-proxy-fb542                1/1     Running   0          12m   192.168.61.11   node1   <none>
kube-system   kube-scheduler-node1            1/1     Running   0          12m   192.168.61.11   node1   <none>
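The "everything Running" check can also be scripted with awk over the STATUS column. A sketch with a shortened canned sample of the output above, so it runs without a cluster:

```shell
# Canned sample of "kubectl get pod --all-namespaces" output; on a real
# cluster use: pods=$(kubectl get pod --all-namespaces)
pods='NAMESPACE     NAME                       READY   STATUS    RESTARTS
kube-system   coredns-576cbf47c7-njt7l   1/1     Running   0
kube-system   etcd-node1                 1/1     Running   0
kube-system   kube-proxy-fb542           1/1     Running   0'
# count data rows whose STATUS column is not "Running"
not_running=$(printf '%s\n' "$pods" | awk 'NR > 1 && $4 != "Running" {n++} END {print n + 0}')
echo "$not_running pod(s) not Running"
```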
7) Allow the master to run workloads
kubectl describe node k8s-master | grep Taint
Taints: node-role.kubernetes.io/master:NoSchedule

Since this is a test environment, remove the taint so the master participates in scheduling:
kubectl taint nodes k8s-master node-role.kubernetes.io/master-
node "k8s-master" untainted
8) Verify again that all pods are Running
[root@K8s-master ~]# kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE     IP              NODE         NOMINATED NODE   READINESS GATES
default       curl-66959f6557-6qvpz                1/1     Running   1          23h     10.244.0.6      k8s-master   <none>           <none>
default       nginx-7cdbd8cdc9-nvkcl               1/1     Running   0          5h56m   10.244.1.2      k8s-node1    <none>           <none>
kube-system   coredns-78d4cf999f-2zg4q             1/1     Running   1          23h     10.244.0.5      k8s-master   <none>           <none>
kube-system   coredns-78d4cf999f-snnkz             1/1     Running   1          23h     10.244.0.7      k8s-master   <none>           <none>
kube-system   etcd-k8s-master                      1/1     Running   2          23h     192.168.1.100   k8s-master   <none>           <none>
kube-system   kube-apiserver-k8s-master            1/1     Running   6          23h     192.168.1.100   k8s-master   <none>           <none>
kube-system   kube-controller-manager-k8s-master   1/1     Running   7          23h     192.168.1.100   k8s-master   <none>           <none>
kube-system   kube-flannel-ds-amd64-bb6m8          1/1     Running   1          23h     192.168.1.100   k8s-master   <none>           <none>
kube-system   kube-flannel-ds-amd64-px2fv          1/1     Running   1          22h     192.168.1.101   k8s-node1    <none>           <none>
kube-system   kube-proxy-bfgq4                     1/1     Running   2          23h     192.168.1.100   k8s-master   <none>           <none>
kube-system   kube-proxy-p2hqr                     1/1     Running   1          22h     192.168.1.101   k8s-node1    <none>           <none>
kube-system   kube-scheduler-k8s-master            1/1     Running   7          23h     192.168.1.100   k8s-master   <none>           <none>
9) Test DNS
[root@K8s-master ~]# kubectl run curl --image=radial/busyboxplus:curl -it
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
If you don't see a command prompt, try pressing enter.
# Once inside the pod, run nslookup kubernetes.default to confirm that name resolution works:
[ root@curl-66959f6557-6qvpz:/ ]$ nslookup kubernetes.default
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
[ root@curl-66959f6557-6qvpz:/ ]$
10) Add a node to the cluster
[root@k8s-node1 ~]# kubeadm join 192.168.1.100:6443 --token istyp6.rzgpkpjpv0l3b5f8 --discovery-token-ca-cert-hash sha256:2ec77ce65a291770f6fcf42b60fc5b2200a8a381d46ce2b1bf7ec73310a95727 --ignore-preflight-errors=Swap

[preflight] running pre-flight checks
	[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_rr ip_vs_wrr ip_vs_sh ip_vs] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
 2. Provide the missing builtin kernel ipvs support

	[WARNING Swap]: running with swap on is not supported. Please disable swap
[discovery] Trying to connect to API Server "192.168.1.100:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.100:6443"
[discovery] Requesting info from "https://192.168.1.100:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.1.100:6443"
[discovery] Successfully established connection with API Server "192.168.1.100:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-node1" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
11) List the cluster nodes:
kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   26m   v1.13.4
k8s-node1    Ready    <none>   2m    v1.13.4
12) To remove the node k8s-node1 from the cluster, run the following.
On the master:
kubectl drain k8s-node1 --delete-local-data --force --ignore-daemonsets
kubectl delete node k8s-node1
On k8s-node1 itself:
kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/
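Since these removal steps span two machines and are destructive, it can help to review them as a dry run first. A sketch: remove_node is a hypothetical helper, and with RUN=echo it only prints each command instead of executing it:

```shell
RUN=echo    # set RUN='' to actually execute the commands
remove_node() {
    node="$1"
    # on the master: drain the node and delete its Node object
    $RUN kubectl drain "$node" --delete-local-data --force --ignore-daemonsets
    $RUN kubectl delete node "$node"
    # on the node itself: reset kubeadm state and tear down CNI interfaces
    $RUN kubeadm reset
    $RUN ip link delete cni0
    $RUN ip link delete flannel.1
    $RUN rm -rf /var/lib/cni/
}
remove_node k8s-node1
```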
Time was limited when writing this, so mistakes may remain; please consult the relevant documentation to make sure everything runs correctly.
Original source: https://www.cnblogs.com/Smbands/p/10520142.html