As of September 1, 2015, CentOS has added Kubernetes to its official repositories, so installing Kubernetes is now much more convenient.
The component versions are as follows:

Kubernetes-1.0.3  docker-1.8.2  flannel-0.5.3  etcd-2.1.1

The deployment environment consists of three CentOS 7.2 64-bit virtual machines with the following roles:

master:  192.168.32.15
minion1: 192.168.32.16
minion2: 192.168.32.17
1. Preliminary setup
Disable iptables on every machine to avoid conflicts with Docker's own iptables rules:
systemctl stop firewalld
systemctl disable firewalld

Disable SELinux:
vim /etc/selinux/config
#SELINUX=enforcing
SELINUX=disabled
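The change in /etc/selinux/config only takes effect after a reboot; SELinux can also be switched to permissive mode for the current session (an optional extra step, not part of the original walkthrough):

setenforce 0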
Install Docker on the two minion machines:

yum -y install docker
yum -y update
reboot
On CentOS, Docker uses devicemapper as the storage backend, but a fresh installation falls back to loopback devices, which causes Docker to report errors at startup; run the update before starting Docker.
Update the Docker startup configuration
vim /etc/sysconfig/docker
Add -H tcp://0.0.0.0:2375 so that the remote API can be used for maintenance later. The final configuration is as follows:
OPTIONS='--selinux-enabled -H tcp://0.0.0.0:2375 -H fd://'
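After restarting Docker, the remote API can be sanity-checked from another host (an optional check; minion1's address is taken from the layout above, and firewalld is already disabled so the port is reachable):

systemctl restart docker
curl http://192.168.32.16:2375/version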
A note in advance: when Kubernetes runs pods it also starts a companion image called pause for each pod. Pull this image from docker.io first, then retag it with docker:
docker pull docker.io/kubernetes/pause
docker tag kubernetes/pause gcr.io/google_containers/pause:0.8.0
docker tag gcr.io/google_containers/pause:0.8.0 gcr.io/google_containers/pause
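To confirm the tags are in place, the local image list can be filtered (an optional check):

docker images | grep pause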
2. Installing and configuring the master node
Install etcd and kubernetes-master:
yum -y install etcd kubernetes-master

Edit the etcd configuration file:
# egrep -v '^#' /etc/etcd/etcd.conf
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.32.15:2379"

Edit the kube-apiserver configuration file:
# egrep -v '^#' /etc/kubernetes/apiserver | grep -v '^$'
KUBE_API_ADDRESS="--address=0.0.0.0"
KUBE_ETCD_SERVERS="--etcd_servers=http://192.168.32.15:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_API_ARGS=""
# egrep -v '^#' /etc/kubernetes/controller-manager | grep -v '^$'
KUBE_CONTROLLER_MANAGER_ARGS="--node-monitor-grace-period=10s --pod-eviction-timeout=10s"
[root@localhost ~]# egrep -v '^#' /etc/kubernetes/config | egrep -v '^$'
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow_privileged=false"
KUBE_MASTER="--master=http://192.168.32.15:8080"

Start the services:
systemctl enable etcd kube-apiserver kube-scheduler kube-controller-manager
systemctl start etcd kube-apiserver kube-scheduler kube-controller-manager
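Once the services are up, etcd and the API server can be sanity-checked from the master (an optional verification step; the API server listens on port 8080 as configured above):

etcdctl cluster-health
curl http://192.168.32.15:8080/version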
Define the flannel network configuration in etcd; this configuration will be picked up by the flannel service on each minion:
etcdctl mk /coreos.com/network/config '{"Network":"172.17.0.0/16"}'
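The stored value can be read back to confirm it was written correctly (an optional check):

etcdctl get /coreos.com/network/config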
3. Installing and configuring the minion nodes

yum -y install kubernetes-node flannel

Edit the kube-node and flannel configuration files:
# egrep -v '^#' /etc/kubernetes/config | grep -v '^$'
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow_privileged=false"
KUBE_MASTER="--master=http://192.168.32.15:8080"
# egrep -v '^#' /etc/kubernetes/kubelet | grep -v '^$'
KUBELET_ADDRESS="--address=127.0.0.1"
KUBELET_HOSTNAME="--hostname_override=192.168.32.16"
KUBELET_API_SERVER="--api_servers=http://192.168.32.15:8080"
KUBELET_ARGS="--pod-infra-container-image=kubernetes/pause"

Point flannel at the etcd service by editing /etc/sysconfig/flanneld:
FLANNEL_ETCD="http://192.168.32.15:2379"
FLANNEL_ETCD_KEY="/coreos.com/network"

Start the services:
systemctl enable flanneld kubelet kube-proxy
systemctl restart flanneld docker
systemctl start kubelet kube-proxy

On each minion there are now two extra network interfaces, docker0 and flannel0; their IP addresses differ from machine to machine:
#minion1
4: flannel0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1472 qdisc pfifo_fast state UNKNOWN qlen 500
    link/none
    inet 172.17.98.0/16 scope global flannel0
       valid_lft forever preferred_lft forever
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
    link/ether 02:42:9a:01:ca:99 brd ff:ff:ff:ff:ff:ff
    inet 172.17.98.1/24 scope global docker0
       valid_lft forever preferred_lft forever

#minion2
4: flannel0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1472 qdisc pfifo_fast state UNKNOWN qlen 500
    link/none
    inet 172.17.67.0/16 scope global flannel0
       valid_lft forever preferred_lft forever
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
    link/ether 02:42:25:be:ba:64 brd ff:ff:ff:ff:ff:ff
    inet 172.17.67.1/24 scope global docker0
       valid_lft forever preferred_lft forever
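flannel also records the subnet it leased for the host in /run/flannel/subnet.env (its default path), which provides another way to confirm that each minion received a distinct range; pinging the other minion's docker0 address then confirms cross-host connectivity (an optional check; the address below is minion2's docker0 from the output above, pinged from minion1):

cat /run/flannel/subnet.env
ping -c 3 172.17.67.1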
4. Checking the status

Log in to the master and confirm the status of the minions:
[root@master ~]# kubectl get nodes
NAME            LABELS                                 STATUS
192.168.32.16   kubernetes.io/hostname=192.168.32.16   Ready
192.168.32.17   kubernetes.io/hostname=192.168.32.17   Ready

The Kubernetes cluster is now configured. The next step is running pods, which will be covered in a follow-up experiment.
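As a quick smoke test of the finished cluster, a pod can be started from the master; this is a minimal sketch and the nginx image is an assumption, not part of the original walkthrough:

kubectl run nginx --image=nginx --port=80
kubectl get pods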