This article is from my GitHub Pages blog at http://galengao.github.io/, also available at www.gaohuirong.cn
Abstract:
The systems run CentOS 7. Disable the firewall with systemctl stop firewalld.service, and disable SELinux by editing /etc/selinux/config.
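A minimal sketch of this preparation step, to be run on all three machines before anything else (setenforce 0 turns SELinux off for the current session; the sed line makes the change permanent for the next boot):

# run on all three machines: 192.168.10.147, 192.168.10.148, 192.168.10.149
systemctl stop firewalld.service
systemctl disable firewalld.service                            # keep the firewall off after reboot
setenforce 0                                                   # disable SELinux for the running session
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config   # make the change permanent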
A Kubernetes cluster has two main types of nodes: master nodes and minion nodes.
Minion nodes are where the Docker containers actually run; they interact with the Docker daemon on the node and also provide proxy functionality.
Master nodes expose the APIs for managing the cluster and carry out cluster operations by interacting with the minion nodes.
apiserver: the entry point for interacting with the Kubernetes cluster. It wraps the create/read/update/delete operations on the core objects, exposes a RESTful API, and uses etcd for persistence and to keep objects consistent.
scheduler: responsible for scheduling and managing cluster resources. For example, when a pod exits abnormally and has to be placed on a new machine, the scheduler uses its scheduling algorithm to find the most suitable node.
controller-manager: mainly ensures that the number of replicas defined by a replicationController matches the number of pods actually running, and keeps the mapping from services to pods up to date.
kubelet: runs on the minion nodes and interacts with Docker on the node, for example starting and stopping containers and monitoring their status.
proxy: runs on the minion nodes and provides proxying for pods. It periodically fetches service information from etcd and, based on it, rewrites iptables rules to forward traffic to the node where the target pod runs (the earliest versions forwarded traffic in the proxy process itself, which was less efficient).
etcd: a key-value store used to hold the Kubernetes cluster state.
flannel: an overlay network tool designed by the CoreOS team for Kubernetes; it has to be downloaded and deployed separately. When Docker starts it sets up an IP address used to talk to its containers. Left unmanaged, that address may be identical on every machine and only works for communication on that host, so containers on other machines cannot be reached. Flannel re-plans IP address allocation for all nodes in the cluster so that containers on different nodes get non-overlapping addresses within one shared private network and can talk to each other directly over those internal IPs (a quick way to inspect this is shown right after this list).
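As a concrete illustration of the last two items: once the cluster described below is running, the subnet flannel handed out to each node can be read straight from etcd, and flanneld also records it in an environment file on the node. The /coreos.com/network key matches the flannel configuration used later in this article; the /run/flannel/subnet.env path is what the CentOS flannel package writes and is an assumption if your packaging differs.

etcdctl ls /coreos.com/network/subnets   # one key per node, named after its /24 inside 10.1.0.0/16
cat /run/flannel/subnet.env              # FLANNEL_SUBNET / FLANNEL_MTU as seen by this node's flanneld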
Here I use three servers to build a simple cluster:
192.168.10.147 # master node (etcd, kubernetes-master)
192.168.10.148 # node (etcd, kubernetes-node, docker, flannel)
192.168.10.149 # node (etcd, kubernetes-node, docker, flannel)
Kubernetes involves quite a few processes; the processes running on each node are shown in the figure:
For the installation steps, refer to my other article on installing Docker:
yum update
tee /etc/yum.repos.d/docker.repo <<EOF
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/7/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg
EOF
yum install docker-engine
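After the package is installed on the two node machines, Docker also needs to be started and enabled so the kubelet can talk to it (assuming the standard systemd unit shipped with docker-engine):

systemctl start docker
systemctl enable docker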
On the master (192.168.10.147), install the Kubernetes master components and etcd:
yum install kubernetes-master etcd -y
On the two nodes (192.168.10.148 and 192.168.10.149), install the Kubernetes node components, etcd, and flannel:
yum install kubernetes-node etcd flannel -y
Edit the etcd configuration file on the master node:
vi /etc/etcd/etcd.conf

# [member]
ETCD_NAME=etcd1
ETCD_DATA_DIR="/var/lib/etcd/etcd1.etcd"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="http://192.168.10.147:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.10.147:2379,http://127.0.0.1:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.10.147:2380"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
ETCD_INITIAL_CLUSTER="etcd1=http://192.168.10.147:2380,etcd2=http://192.168.10.148:2380,etcd3=http://192.168.10.149:2380"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.10.147:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#
#[proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[security]
#ETCD_CERT_FILE=""
#ETCD_KEY_FILE=""
#ETCD_CLIENT_CERT_AUTH="false"
#ETCD_TRUSTED_CA_FILE=""
#ETCD_PEER_CERT_FILE=""
#ETCD_PEER_KEY_FILE=""
#ETCD_PEER_CLIENT_CERT_AUTH="false"
#ETCD_PEER_TRUSTED_CA_FILE=""
#
#[logging]
#ETCD_DEBUG="false"
# examples for -log-package-levels etcdserver=WARNING,security=DEBUG
#ETCD_LOG_PACKAGE_LEVELS=""
Edit the configuration file on node1 (192.168.10.148):
vi /etc/etcd/etcd.conf

# [member]
ETCD_NAME=etcd2
ETCD_DATA_DIR="/var/lib/etcd/etcd2"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="http://192.168.10.148:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.10.148:2379,http://127.0.0.1:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.10.148:2380"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
ETCD_INITIAL_CLUSTER="etcd1=http://192.168.10.147:2380,etcd2=http://192.168.10.148:2380,etcd3=http://192.168.10.149:2380"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.10.148:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#
#[proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[security]
#ETCD_CERT_FILE=""
#ETCD_KEY_FILE=""
#ETCD_CLIENT_CERT_AUTH="false"
#ETCD_TRUSTED_CA_FILE=""
#ETCD_PEER_CERT_FILE=""
#ETCD_PEER_KEY_FILE=""
#ETCD_PEER_CLIENT_CERT_AUTH="false"
#ETCD_PEER_TRUSTED_CA_FILE=""
#
#[logging]
#ETCD_DEBUG="false"
# examples for -log-package-levels etcdserver=WARNING,security=DEBUG
#ETCD_LOG_PACKAGE_LEVELS=""
Edit the configuration file on node2 (192.168.10.149):
vi /etc/etcd/etcd.conf

# [member]
ETCD_NAME=etcd3
ETCD_DATA_DIR="/var/lib/etcd/etcd3"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="http://192.168.10.149:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.10.149:2379,http://127.0.0.1:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.10.149:2380"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
ETCD_INITIAL_CLUSTER="etcd1=http://192.168.10.147:2380,etcd2=http://192.168.10.148:2380,etcd3=http://192.168.10.149:2380"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.10.149:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#
#[proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[security]
#ETCD_CERT_FILE=""
#ETCD_KEY_FILE=""
#ETCD_CLIENT_CERT_AUTH="false"
#ETCD_TRUSTED_CA_FILE=""
#ETCD_PEER_CERT_FILE=""
#ETCD_PEER_KEY_FILE=""
#ETCD_PEER_CLIENT_CERT_AUTH="false"
#ETCD_PEER_TRUSTED_CA_FILE=""
#
#[logging]
#ETCD_DEBUG="false"
# examples for -log-package-levels etcdserver=WARNING,security=DEBUG
#ETCD_LOG_PACKAGE_LEVELS=""
A brief explanation of the main settings:
[member]
ETCD_NAME: the name of this etcd node
ETCD_DATA_DIR: the etcd data directory
ETCD_SNAPSHOT_COUNT: the number of committed transactions that triggers a snapshot
ETCD_HEARTBEAT_INTERVAL: the interval between heartbeats sent between etcd nodes, in milliseconds
ETCD_ELECTION_TIMEOUT: the election timeout for this node, in milliseconds
ETCD_LISTEN_PEER_URLS: the list of addresses this node listens on for communication with other nodes; multiple addresses are separated by commas, each in the form scheme://IP:PORT, where the scheme can be http or https
ETCD_LISTEN_CLIENT_URLS: the list of addresses this node listens on for client communication
[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS: the list of peer addresses this member advertises to the rest of the cluster. Cluster data is transferred over these addresses, so they must be reachable by every member of the cluster.
ETCD_INITIAL_CLUSTER: the addresses of all members of the cluster, each in the form ETCD_NAME=ETCD_INITIAL_ADVERTISE_PEER_URLS; multiple members are separated by commas
ETCD_ADVERTISE_CLIENT_URLS: the list of this member's client addresses advertised to the other members of the cluster
At this point the etcd cluster is configured. Start etcd on every node:
systemctl start etcd
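It is also worth enabling the unit so etcd comes back after a reboot (assuming the stock systemd unit from the etcd package):

systemctl enable etcd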
Verify:
[root@k8s1 ~]# etcdctl cluster-health
member 35300bfb5308e02c is healthy: got healthy result from http://192.168.10.147:2379
member 776c306b60e6f972 is healthy: got healthy result from http://192.168.10.149:2379
member a40f86f061be3fbe is healthy: got healthy result from http://192.168.10.148:2379
Edit the apiserver configuration file on the master:
[root@k8s1 ~]# vi /etc/kubernetes/apiserver
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
# KUBE_API_ADDRESS="--insecure-bind-address=127.0.0.1"
KUBE_API_ADDRESS="--address=0.0.0.0"

# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"

# Port minions listen on
KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.10.147:2379,http://192.168.10.148:2379,http://192.168.10.149:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"

# Add your own!
KUBE_API_ARGS=""
Configure the controller-manager; for now no changes are needed:
[root@k8s1 etcd]# vi /etc/kubernetes/controller-manager
###
# The following values are used to configure the kubernetes controller-manager

# defaults from config and apiserver should be adequate

# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS=""
Start the three services on the master:
systemctl start kube-apiserver
systemctl start kube-controller-manager
systemctl start kube-scheduler
systemctl enable kube-apiserver
systemctl enable kube-controller-manager
systemctl enable kube-scheduler
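As an optional sanity check (assuming kubectl was pulled in alongside kubernetes-master, as it is on these CentOS packages), the master components can be queried through the insecure port configured above:

kubectl -s http://127.0.0.1:8080 get componentstatuses   # scheduler, controller-manager and the etcd members should report Healthy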
On the node machines, edit the shared config file:
[root@k8s1 ~]# vi /etc/kubernetes/config
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://192.168.10.147:8080"
Edit the kubelet configuration:
[root@k8s1 ~]# vi /etc/kubernetes/kubelet
###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=127.0.0.1"

# The port for the info server to serve on
# KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=192.168.10.148"

# location of the api-server
KUBELET_API_SERVER="--api-servers=http://192.168.10.147:8080"

# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

# Add your own!
KUBELET_ARGS=""
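The file above is the one for the first node (192.168.10.148). On the second node (192.168.10.149) the hostname override has to point at that node's own address, otherwise both kubelets would register under the same name; only this line differs:

KUBELET_HOSTNAME="--hostname-override=192.168.10.149"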
Start the Kubernetes node services on each node:
systemctl start kubelet
systemctl start kube-proxy
systemctl enable kubelet
systemctl enable kube-proxy
The network layer is configured in the Kubernetes cluster as a plugin; here flannel is used.
It was already installed on the nodes in the steps above:
yum install flannel -y
[root@k8s1 ~]# vi /etc/sysconfig/flanneld
# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD="http://192.168.10.147:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_KEY="/coreos.com/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
# Run only against etcd on the master
etcdctl mk /coreos.com/network/config '{"Network": "10.1.0.0/16"}'
# To recreate the key, delete it first
etcdctl rm /coreos.com/network/ --recursive
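To confirm the key was written, it can be read back through any of the etcd members:

etcdctl get /coreos.com/network/config   # should print {"Network": "10.1.0.0/16"}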
Reset the docker0 bridge configuration
Delete the docker0 bridge that Docker creates by default at startup. When flannel starts, it obtains a network range and configures docker0's IP address as the gateway of that network; if docker0 already has an IP address configured at that point, flannel will fail to start.
ip link del docker0
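The start order is not spelled out above; a reasonable sequence on each node, assuming the stock flanneld and docker systemd units from these packages, is to start flanneld first so it claims a subnet, then restart Docker so the recreated docker0 bridge picks up the flannel-assigned range:

systemctl enable flanneld
systemctl start flanneld    # flanneld claims a subnet from 10.1.0.0/16 and records it in etcd
systemctl restart docker    # docker recreates docker0 inside the flannel-assigned subnet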
Run the following on the master to check the state of Kubernetes:
[root@k8s1 ~]# kubectl get nodes
NAME             STATUS    AGE
192.168.10.148   Ready     3h
192.168.10.149   Ready     3h
Run the following on the master to check the state of etcd:
[root@k8s1 ~]# etcdctl member list
35300bfb5308e02c: name=etcd1 peerURLs=http://192.168.10.147:2380 clientURLs=http://192.168.10.147:2379
776c306b60e6f972: name=etcd3 peerURLs=http://192.168.10.149:2380 clientURLs=http://192.168.10.149:2379
a40f86f061be3fbe: name=etcd2 peerURLs=http://192.168.10.148:2380 clientURLs=http://192.168.10.148:2379
To inspect logs on CentOS 7: journalctl -xe, or systemctl status flanneld.service (replace flanneld with whichever service you are troubleshooting).