1. Server environment preparation
192.168.247.128 : k8s-master, etcd, registry
192.168.247.129 : k8s-nodeA
192.168.247.130 : k8s-nodeB
Note: to install the lsb_release command, run: yum install redhat-lsb -y
All three machines have the same configuration.
Install the same version of Docker on all three machines:
[root@localhost ~]# docker -v
Docker version 1.12.6, build 85d7426/1.12.6
Change the hostname on each of the three machines.
On the master, run:
[root@localhost ~]# hostnamectl --static set-hostname k8s-master
On nodeA, run:
[root@localhost ~]# hostnamectl --static set-hostname k8s-nodeA
On nodeB, run:
[root@localhost ~]# hostnamectl --static set-hostname k8s-nodeB
Configure /etc/hosts on all three machines. Run the following command to append the entries (each mapping must be on its own line, hence echo -e with \n):
echo -e '192.168.247.128 k8s-master\n192.168.247.128 etcd\n192.168.247.128 registry\n192.168.247.129 k8s-nodeA\n192.168.247.130 k8s-nodeB' >> /etc/hosts
Stop the firewall on all three machines; on each one, run:
[root@localhost ~]# systemctl stop firewalld
Note: firewall-related commands:
Check firewall status: systemctl status firewalld
Stop the firewall: systemctl stop firewalld
Start the firewall: systemctl start firewalld
2. Install etcd
Kubernetes depends on etcd, so install it first. Install etcd via yum.
On k8s-master, run:
yum install etcd -y
After the installation, edit the configuration file. With a yum install, the default etcd config file is /etc/etcd/etcd.conf.
Modify the following three parameter values (a sketch follows):
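The exact values are not reproduced in the original; for a single-node etcd like this one, the three parameters are typically set along these lines (assuming the etcd hostname defined in /etc/hosts above):
# /etc/etcd/etcd.conf
ETCD_NAME=default
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
ETCD_ADVERTISE_CLIENT_URLS="http://etcd:2379,http://etcd:4001"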
Run the following commands to start etcd and verify that its status is correct:
[root@localhost ~]# systemctl start etcd
[root@localhost ~]# etcdctl set developer xiejunbo
xiejunbo
[root@localhost ~]# etcdctl get developer
xiejunbo
[root@localhost ~]# etcdctl -C http://etcd:4001 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://etcd:2379
cluster is healthy
[root@localhost ~]# etcdctl -C http://etcd:2379 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://etcd:2379
cluster is healthy
This shows etcd is healthy and ready for use.
3. Deploy k8s-master
Install Docker:
yum install docker
Edit the Docker configuration file: vi /etc/sysconfig/docker (a typical edit is sketched below).
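The original does not show which line is changed; in a setup like this, where a private registry runs on the master, a common edit is to allow insecure access to it (hypothetical values, adjust to your environment):
# /etc/sysconfig/docker
OPTIONS='--selinux-enabled --log-driver=journald --insecure-registry registry:5000'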
Enable Docker to start on boot, then start the Docker service:
[root@localhost ~]# chkconfig docker on
[root@localhost ~]# service docker start
Install Kubernetes:
Install Kubernetes via yum: yum install kubernetes
After Kubernetes is installed, configure and start it.
The Kubernetes master needs to run the following components:
1.kubernetes api server
2.kubernetes controller manager
3.kubernetes scheduler
Modify the corresponding configuration files:
/etc/kubernetes/apiserver: modify four parameters
/etc/kubernetes/config: modify one parameter (both files are sketched below)
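The concrete values are not reproduced in the original; with the yum-packaged Kubernetes on CentOS 7 the edits usually look roughly like this (hostnames taken from /etc/hosts above; the fourth apiserver change is commonly dropping ServiceAccount from the admission controllers):
# /etc/kubernetes/apiserver
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBE_ETCD_SERVERS="--etcd-servers=http://etcd:2379"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
# /etc/kubernetes/config
KUBE_MASTER="--master=http://k8s-master:8080"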
After making the changes, start the services and set them to start on boot:
[root@localhost ~]# systemctl enable kube-apiserver.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
[root@localhost ~]# systemctl start kube-apiserver.service
[root@localhost ~]# systemctl enable kube-controller-manager.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
[root@localhost ~]# systemctl start kube-controller-manager.service
[root@localhost ~]# systemctl enable kube-scheduler.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
[root@localhost ~]# systemctl start kube-scheduler.service
4. Deploy k8s-node
1. Install Docker (same as above, omitted).
2. Install Kubernetes on nodeA: yum install kubernetes
Configure and start Kubernetes.
The k8s-node needs to run the following components:
1.kubelet
2.kubernetes proxy
Two configuration files need to be modified accordingly:
In /etc/kubernetes/config, modify the KUBE_MASTER address parameter.
In /etc/kubernetes/kubelet, modify three parameters (both files are sketched below).
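The concrete values are not reproduced in the original; a typical nodeA configuration (hostnames from /etc/hosts above) would be roughly:
# /etc/kubernetes/config
KUBE_MASTER="--master=http://k8s-master:8080"
# /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname-override=k8s-nodeA"
KUBELET_API_SERVER="--api-servers=http://k8s-master:8080"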
After the changes, start the services and enable them on boot:
[root@localhost ~]# systemctl enable kubelet.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@localhost ~]# systemctl start kubelet.service
[root@localhost ~]# systemctl enable kube-proxy.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
[root@localhost ~]# systemctl start kube-proxy.service
After the node starts, check on the master that its status is normal:
[root@localhost ~]# kubectl -s http://k8s-master:8080 get node
NAME STATUS AGE
k8s-nodea Ready 2m
[root@localhost ~]# kubectl get nodes
NAME STATUS AGE
k8s-nodea Ready 7m
On k8s-nodeB, follow the same steps as nodeA and install Kubernetes:
After Kubernetes is installed, modify the configuration as on k8s-nodeA (using the k8s-nodeB hostname).
After the configuration changes, start the services and enable them on boot:
[root@localhost ~]# systemctl enable kubelet.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@localhost ~]# systemctl start kubelet.service
[root@localhost ~]# systemctl enable kube-proxy.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
[root@localhost ~]# systemctl start kube-proxy.service
On the master, check the cluster's nodes and their status (expected output sketched below):
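The original output is not reproduced here; with both nodes registered, the command on the master should report both of them as Ready, for example:
[root@localhost ~]# kubectl get nodes
NAME STATUS AGE
k8s-nodea Ready 12m
k8s-nodeb Ready 1m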
5. Create the Flannel network
Install Flannel on k8s-master, k8s-nodeA, and k8s-nodeB by running:
yum install flannel
After the installation, edit the configuration file /etc/sysconfig/flanneld on k8s-master, k8s-nodeA, and k8s-nodeB (a sketch of the relevant settings follows).
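The file contents are not shown in the original; the settings that normally need to match the rest of this setup (the etcd hostname and the key configured below) are roughly:
# /etc/sysconfig/flanneld
FLANNEL_ETCD_ENDPOINTS="http://etcd:2379"
FLANNEL_ETCD_PREFIX="/atomic.io/network"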
On k8s-master, set the etcd key that Flannel reads its configuration from:
[root@localhost ~]# etcdctl mk /atomic.io/network/config '{"Network":"192.0.0.0/16"}'
{"Network":"192.0.0.0/16"}
Note: Flannel stores its configuration in etcd so that multiple Flannel instances stay consistent, which is why the key above must be set. (The '/atomic.io/network/config' key must correspond to the FLANNEL_ETCD_PREFIX setting in /etc/sysconfig/flanneld; if they do not match, flanneld will fail to start.)
After starting Flannel, restart Docker and the Kubernetes services in order.
On the master, run:
systemctl enable flanneld.service
systemctl start flanneld.service
service docker restart
systemctl restart kube-apiserver.service
systemctl restart kube-controller-manager.service
systemctl restart kube-scheduler.service
On each node, run:
systemctl enable flanneld.service
systemctl start flanneld.service
service docker restart
systemctl restart kubelet.service
systemctl restart kube-proxy.service
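Not part of the original steps, but a quick sanity check after the restarts is to confirm that the flannel0 and docker0 interfaces on each machine picked up subnets from the 192.0.0.0/16 range configured above:
ip addr show flannel0
ip addr show docker0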
Installation and configuration are complete.
===========================================================
Check the K8S version (e.g., as sketched below):
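The command is not shown in the original; the installed version can be checked on the master with, for example:
[root@localhost ~]# kubectl version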
Congratulations! The K8S cluster environment is set up. Time to get to work!
============================================================