Deploying Kubernetes + Flannel on CentOS (New)

1. Preparation

1) Three CentOS hosts

k8s master: 10.11.151.97  tc-151-97

k8s node1: 10.11.151.100  tc-151-100

k8s node2: 10.11.151.101  tc-151-101

2) Software downloads (Baidu netdisk)

k8s-1.1.3, Docker-1.8.2, ETCD-2.2.1, Flannel-0.5.5

2. ETCD cluster deployment

ETCD is the foundation of a k8s cluster and can be deployed as a single node or as a cluster. In this article the three hosts form an ETCD cluster, started as a systemd service. Perform the following steps on each of the three hosts:

1) Unpack the ETCD package and copy etcd and etcdctl into the working directory (in this article the working directory is /opt/domeos/openxxs/k8s-1.1.3-flannel).

2) Create the file /lib/systemd/system/etcd.service, the systemd service file on CentOS; take care to set the absolute path of the etcd binary inside it:

[Unit]
Description=ETCD

[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/etcd
ExecStart=/opt/domeos/openxxs/k8s-1.1.3-flannel/etcd $ETCD_NAME \
          $INITIAL_ADVERTISE_PEER_URLS \
          $LISTEN_PEER_URLS \
          $ADVERTISE_CLIENT_URLS \
          $LISTEN_CLIENT_URLS \
          $INITIAL_CLUSTER_TOKEN \
          $INITIAL_CLUSTER \
          $INITIAL_CLUSTER_STATE \
          $ETCD_OPTS
Restart=on-failure

3) Create the file /etc/sysconfig/etcd, the configuration file for the service. The ETCD_NAME, INITIAL_ADVERTISE_PEER_URLS and ADVERTISE_CLIENT_URLS parameters differ between the three hosts. The file below is for host 97; make the corresponding changes on 100 and 101:

# configure file for etcd

# -name
ETCD_NAME='-name k8sETCD0'
# -initial-advertise-peer-urls
INITIAL_ADVERTISE_PEER_URLS='-initial-advertise-peer-urls http://10.11.151.97:4010'
# -listen-peer-urls
LISTEN_PEER_URLS='-listen-peer-urls http://0.0.0.0:4010'
# -advertise-client-urls
ADVERTISE_CLIENT_URLS='-advertise-client-urls http://10.11.151.97:4011,http://10.11.151.97:4012'
# -listen-client-urls
LISTEN_CLIENT_URLS='-listen-client-urls http://0.0.0.0:4011,http://0.0.0.0:4012'
# -initial-cluster-token
INITIAL_CLUSTER_TOKEN='-initial-cluster-token k8s-etcd-cluster'
# -initial-cluster
INITIAL_CLUSTER='-initial-cluster k8sETCD0=http://10.11.151.97:4010,k8sETCD1=http://10.11.151.100:4010,k8sETCD2=http://10.11.151.101:4010'
# -initial-cluster-state
INITIAL_CLUSTER_STATE='-initial-cluster-state new'
# other parameters
ETCD_OPTS=''
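
For reference, a sketch of the host-specific lines on host 100 (the remaining lines are identical to the file above; host 101 is analogous, with k8sETCD2 and 10.11.151.101):

# host-specific lines of /etc/sysconfig/etcd on 10.11.151.100
ETCD_NAME='-name k8sETCD1'
INITIAL_ADVERTISE_PEER_URLS='-initial-advertise-peer-urls http://10.11.151.100:4010'
ADVERTISE_CLIENT_URLS='-advertise-client-urls http://10.11.151.100:4011,http://10.11.151.100:4012'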

4) Start the ETCD cluster

systemctl daemon-reload
systemctl start etcd

Once this has been done on all three hosts, confirm that the ETCD cluster works with the following commands (host 97 as an example):

# Check the service status
systemctl status -l etcd
# If it is healthy, the output shows Active: active (running), and the end of the log reports that the current node has joined the cluster, e.g. "the connection with 6adad1923d90fb38 became active"
# If the system clocks of the ETCD nodes differ too much, the log warns "the clock difference against ... peer is too high"; correct the system time if necessary
# Check that each cluster node can be reached
curl -L http://10.11.151.97:4012/version
curl -L http://10.11.151.100:4012/version
curl -L http://10.11.151.101:4012/version
# If everything is fine, each returns: {"etcdserver":"2.2.1","etcdcluster":"2.2.0"}
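
The etcdctl binary copied into the working directory earlier can also be used for a health check; a quick sketch, pointing it at one of the client URLs configured above:

# check overall cluster health and list the members
./etcdctl --peers http://10.11.151.97:4012 cluster-health
./etcdctl --peers http://10.11.151.97:4012 member list
# a healthy cluster reports each of the three members as healthy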

3. Configure the network environment

If the network configuration conflicts with the cluster, in particular if iptables rules interfere, the cluster will not work properly after it is started. Confirm the following settings before starting:

1)/etc/hosts

kubelet obtains the local IP address from /etc/hosts, so the hostname-to-IP mapping must be present there. For example, /etc/hosts on host 97 must contain this record:

10.11.151.97   tc-151-97

The hostname is an important parameter in the k8s network setup. It must follow DNS naming rules: letters, digits and hyphens are allowed, but underscores are not (tc_151_97, for example, is invalid). Run the hostname command to check the local hostname. If it does not comply, there are two options: <1> change the host's hostname so that it does; the network has to be restarted during the change, and a script for changing the hostname on CentOS is available (Baidu netdisk, link here); <2> pass --hostname_override when starting kubelet to specify the hostname used inside the cluster, which is the recommended approach when other services already depend on the host's hostname.
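
As a quick sanity check, something along these lines can be used (a sketch; the regular expression only encodes the letters-digits-hyphens rule described above):

# verify the local hostname is a valid DNS label (letters, digits and hyphens only)
hostname | grep -qE '^[A-Za-z0-9]([A-Za-z0-9-]*[A-Za-z0-9])?$' \
    && echo "hostname is OK" \
    || echo "hostname is invalid; rename the host or use --hostname_override"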

2)iptables

flannel takes over the docker network by modifying iptables rules, so iptables must be cleaned up beforehand to make sure nothing conflicts. If there are no important rules in iptables, simply flush it:

# Take a look at the existing iptables rules
iptables -L -n

# If there is nothing important, flush them
iptables -P INPUT ACCEPT
iptables -F

# Take another look to confirm they are gone
iptables -L -n

While we are at it, disable the firewall services altogether (flannel enables iptables automatically when it starts):

systemctl disable iptables-services firewalld
systemctl stop iptables-services firewalld

3)ifconfig

If k8s has been set up on the host more than once, the network interfaces need cleaning up. With the flanneld and docker services stopped, inspect the interfaces with ifconfig; if docker0, flannel.0 or flannel.1 exist, or virtual interfaces left over from a calico setup (a separate article, "Deploying kubernetes + calico on CentOS", is planned and will cover the details), delete them with commands such as:

ip link delete docker0
ip link delete flannel.1
......

4) flannel parameters

The flannel configuration for the cluster, such as the available subnet range and the packet encapsulation type, must be written into ETCD in advance:

curl -L http://10.11.151.97:4012/v2/keys/flannel/network/config -XPUT -d value="{\"Network\":\"172.16.0.0/16\",\"SubnetLen\":25,\"Backend\":{\"Type\":\"vxlan\",\"VNI\":1}}"

The key written to ETCD is /flannel/network/config; it will be needed later when configuring the flannel service. In the configuration, Network is the subnet range available to the whole k8s cluster, SubnetLen is the subnet mask length given to each Node, and Type is the encapsulation type; vxlan is recommended, and udp is also available.
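
To confirm the value has been stored, the key can simply be read back from the same ETCD endpoint:

curl -L http://10.11.151.97:4012/v2/keys/flannel/network/config
# the "value" field of the returned JSON should contain the network configuration written above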

4. Start the k8s master

The k8s master normally consists of three components: kube-apiserver, kube-controller-manager and kube-scheduler. If the host running the k8s master should itself be managed as part of the cluster, for example so that it can use the cluster DNS service, kube-proxy has to be started on it as well; this article does not cover that case. After unpacking the k8s package, copy kube-apiserver, kube-controller-manager and kube-scheduler from bin/linux/amd64/ in the unpacked directory into the working directory.

1) Create, configure and start the kube-apiserver service

<1> The file /lib/systemd/system/kube-apiserver.service; again, make sure the absolute path of the kube-apiserver binary is set correctly:

[Unit]
Description=kube-apiserver

[Service]
EnvironmentFile=/etc/sysconfig/kube-apiserver
ExecStart=/opt/domeos/openxxs/k8s-1.1.3-flannel/kube-apiserver $ETCD_SERVERS \
          $LOG_DIR \
          $SERVICE_CLUSTER_IP_RANGE \
          $INSECURE_BIND_ADDRESS \
          $INSECURE_PORT \
          $BIND_ADDRESS \
          $SECURE_PORT \
          $AUTHORIZATION_MODE \
          $AUTHORIZATION_FILE \
          $BASIC_AUTH_FILE \
          $KUBE_APISERVER_OPTS
Restart=on-failure

<2> The file /etc/sysconfig/kube-apiserver:

# configure file for kube-apiserver

# --etcd-servers
ETCD_SERVERS='--etcd-servers=http://10.11.151.97:4012,http://10.11.151.100:4012,http://10.11.151.101:4012'
# --log-dir
LOG_DIR='--log-dir=/opt/domeos/openxxs/k8s-1.1.3-flannel/logs'
# --service-cluster-ip-range
SERVICE_CLUSTER_IP_RANGE='--service-cluster-ip-range=172.16.0.0/16'
# --insecure-bind-address
INSECURE_BIND_ADDRESS='--insecure-bind-address=0.0.0.0'
# --insecure-port
INSECURE_PORT='--insecure-port=8080'
# --bind-address
BIND_ADDRESS='--bind-address=0.0.0.0'
# --secure-port
SECURE_PORT='--secure-port=6443'
# --authorization-mode
AUTHORIZATION_MODE='--authorization-mode=ABAC'
# --authorization-policy-file
AUTHORIZATION_FILE='--authorization-policy-file=/opt/domeos/openxxs/k8s-1.1.3-flannel/authorization'
# --basic-auth-file
BASIC_AUTH_FILE='--basic-auth-file=/opt/domeos/openxxs/k8s-1.1.3-flannel/authentication.csv'
# other parameters
KUBE_APISERVER_OPTS=''

If https authentication and authorization are not required, BIND_ADDRESS, SECURE_PORT, AUTHORIZATION_MODE, AUTHORIZATION_FILE and BASIC_AUTH_FILE can be left unconfigured. The official k8s documentation describes authentication and authorization in detail (authorization docs here, authentication docs here). The setup in this article authenticates with ABAC (user-defined authorization policy) and stores the password in plain text. The two files look like this:

# contents of /opt/domeos/openxxs/k8s-1.1.3-flannel/authorization:
{"user": "admin"}

# contents of /opt/domeos/openxxs/k8s-1.1.3-flannel/authentication.csv, three columns (password, username, user ID):
admin,admin,adminID

In fact, setting ETCD_SERVERS alone and leaving everything else empty is enough for kube-apiserver to run. Nor does ETCD_SERVERS have to list every node of the ETCD cluster, but it must contain at least one.
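
If the secure port and basic-auth settings above are kept, a quick way to exercise them is a request with the configured credentials (a sketch; -k skips certificate verification since no trusted certificate is set up here):

curl -k -u admin:admin https://10.11.151.97:6443/api
# with valid credentials this should return the list of API versions; wrong credentials should produce an Unauthorized error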

<3> Start kube-apiserver

systemctl daemon-reload
systemctl start kube-apiserver
# After it starts, check that the service status and logs look normal
systemctl status -l kube-apiserver

You can also check whether kube-apiserver is healthy with the following command; if it is, 'ok' is returned:

curl -L http://10.11.151.97:8080/healthz

2) Create, configure and start the kube-controller-manager service

The three components must be started in order: do not start kube-controller-manager until kube-apiserver is up and running.

<1> The file /etc/sysconfig/kube-controller:

# configure file for kube-controller-manager

# --master
KUBE_MASTER='--master=http://10.11.151.97:8080'
# --log-dir
LOG_DIR='--log-dir=/opt/domeos/openxxs/k8s-1.1.3-flannel/logs'
# --cloud-provider
CLOUD_PROVIDER='--cloud-provider='
# other parameters
KUBE_CONTROLLER_OPTS=''

<2> /lib/systemd/system/kube-controller.service

[Unit]
Description=kube-controller-manager
After=kube-apiserver.service
Wants=kube-apiserver.service

[Service]
EnvironmentFile=/etc/sysconfig/kube-controller
ExecStart=/opt/domeos/openxxs/k8s-1.1.3-flannel/kube-controller-manager $KUBE_MASTER \
          $LOG_DIR \
          $CLOUD_PROVIDER \
          $KUBE_CONTROLLER_OPTS
Restart=on-failure

<3> Start kube-controller-manager

systemctl daemon-reload
systemctl start kube-controller
systemctl status -l kube-controller

Now let's look at the errors reported in the log:

I0127 10:34:11.374094   29737 plugins.go:71] No cloud provider specified.
I0127 10:34:11.374212   29737 nodecontroller.go:133] Sending events to api server.
E0127 10:34:11.374448   29737 controllermanager.go:290] Failed to start service controller: ServiceController should not be run without a cloudprovider.
I0127 10:34:11.382191   29737 controllermanager.go:332] Starting extensions/v1beta1 apis
I0127 10:34:11.382217   29737 controllermanager.go:334] Starting horizontal pod controller.
I0127 10:34:11.382284   29737 controllermanager.go:346] Starting job controller
E0127 10:34:11.402650   29737 serviceaccounts_controller.go:215] serviceaccounts "default" already exists

The first error, "ServiceController should not be run without a cloudprovider", means that --cloud-provider must be set. The second, "serviceaccounts "default" already exists", arises because the controller expects every namespace to have a service account; if one is missing it tries to create an account named "default", which however already exists locally. The developers of this module say both errors are "harmless" (see here and here), and the bug has been fixed in later releases. For the first error, the start command must include the --cloud-provider parameter even if its value is empty. For the second, the only fix Google turns up is to remove serviceAccount from the --admission-controllers parameter when starting kube-apiserver, which did not help when I tried it. When a concrete --cloud-provider is set, neither error appears; when --cloud-provider is empty, the two errors really are harmless: they are logged, but the process starts normally, so kube-controller-manager is not affected.

3) Create, configure and start the kube-scheduler service

<1> /etc/sysconfig/kube-scheduler

# configure file for kube-scheduler

# --master
KUBE_MASTER='--master=http://10.11.151.97:8080'
# --log-dir
LOG_DIR='--log-dir=/opt/domeos/openxxs/k8s-1.1.3-flannel/logs'
# other parameters
KUBE_SCHEDULER_OPTS=''

<2> /lib/systemd/system/kube-scheduler.service

[Unit]
Description=kube-scheduler
After=kube-apiserver.service
Wants=kube-apiserver.service

[Service]
EnvironmentFile=/etc/sysconfig/kube-scheduler
ExecStart=/opt/domeos/openxxs/k8s-1.1.3-flannel/kube-scheduler $KUBE_MASTER \
          $LOG_DIR \
          $KUBE_SCHEDULER_OPTS
Restart=on-failure

<3> Start kube-scheduler

systemctl daemon-reload
systemctl start kube-scheduler
systemctl status -l kube-scheduler
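
With all three components running, their overall health can be checked from the master; a sketch, assuming the kubectl binary from the same k8s package is also available in the working directory (it is used again in the test section below):

./kubectl --server=10.11.151.97:8080 get componentstatuses
# scheduler, controller-manager and the etcd members should all be reported as Healthy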

5. Start the k8s nodes

Download the Docker and Flannel rpm packages into the working directory; after unpacking the k8s package, copy kube-proxy and kubelet from bin/linux/amd64/ in the unpacked directory into the working directory.

The start_node.sh script from the DomeOS project, which adds a node in one step (link here), has been trimmed and adapted for this setup; it handles the environment check, installing docker, installing flannel, starting kubelet, and so on. Download start_node.sh into the working directory and adjust the settings in STEP02 as needed. Taking host 100 as an example, once the script is edited and the parameter values are decided, run:

sudo sh start_node.sh --api-server http://10.11.151.97:8080 --iface em1 --hostname-override tc-151-100 --pod-infra 10.11.150.76:5000/kubernetes/pause:latest --cluster-dns 172.16.40.1 --cluster-domain domeos.sohu --insecure-registry 10.11.150.76:5000 --etcd-server http://10.11.151.97:4012

--api-server is the kube-apiserver address; --iface is the network interface currently used for connectivity (on host 100, the interface whose IP is 10.11.151.100); --hostname-override is the alias to use as the hostname; --pod-infra is the location of the /kubernetes/pause:latest image; --cluster-dns is the address of the cluster DNS service; --cluster-domain is the domain suffix used by the DNS service; --insecure-registry is the private registry address; --etcd-server is the ETCD service address used by the cluster.

The rest of this section uses host 101 as an example to show how to configure and start the k8s node side without the start_node.sh script:

1) Install and configure docker

<1> Install Docker

yum install docker-engine-1.8.2-1.el7.centos.x86_64.rpm -y

<2> Edit the configuration file /etc/sysconfig/docker

DOCKER_OPTS="-g /opt/domeos/openxxs/k8s-1.1.3-flannel/docker"
INSECURE_REGISTRY="--insecure-registry 10.11.150.76:5000"

This sets Docker's data directory (by default it lives under /var) and the private image registry.

<3> Edit the configuration file /lib/systemd/system/docker.service

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target docker.socket
Requires=docker.socket

[Service]
EnvironmentFile=/etc/sysconfig/docker
ExecStart=/usr/bin/docker daemon $DOCKER_OPTS \
$DOCKER_STORAGE_OPTIONS \
$DOCKER_NETWORK_OPTIONS \
$ADD_REGISTRY \
$BLOCK_REGISTRY \
$INSECURE_REGISTRY

MountFlags=slave
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity

[Install]
WantedBy=multi-user.target

Note that if a lower docker version is installed, or docker was installed in a non-official way (for example docker-selinux-1.8.2-10.el7.centos.x86_64 and docker-1.8.2.el7.centos.x86_64), the docker.socket file very likely does not exist. In that case remove the two lines "After=network.target docker.socket" and "Requires=docker.socket".
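
In that case the [Unit] section simply becomes (the rest of the file stays the same):

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target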

2) Install and configure flannel

<1> Install flannel

yum install -y flannel-0.5.5-1.fc24.x86_64.rpm

<2> Edit the configuration file /etc/sysconfig/flanneld

FLANNEL_ETCD="http://10.11.151.97:4012"
FLANNEL_ETCD_KEY="/flannel/network"
FLANNEL_OPTIONS="-iface=em1"

Pay special attention here: if the host's NICs have been customized and the interface used to reach the outside network has an unusual name (for example, a 10-gigabit NIC may be called p6p1), flannel fails to start with "Failed to get default interface: Unable to find default route". In that case FLANNEL_OPTIONS must include the parameter iface=<interface used for connectivity>. For example, host 100 uses em1, so iface=em1; for a 10-gigabit NIC named p6p1, use iface=p6p1.
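
For example, on a host whose uplink interface is p6p1, the last line of /etc/sysconfig/flanneld would read:

FLANNEL_OPTIONS="-iface=p6p1"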

<3> Edit the configuration file /lib/systemd/system/flanneld.service

[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
EnvironmentFile=/etc/sysconfig/flanneld
EnvironmentFile=-/etc/sysconfig/docker-network
ExecStart=/usr/bin/flanneld -etcd-endpoints=${FLANNEL_ETCD} -etcd-prefix=${FLANNEL_ETCD_KEY} $FLANNEL_OPTIONS
ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service

3) Start Flannel

systemctl daemon-reload
systemctl start flanneld
systemctl status -l flanneld
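
Once flanneld is running it should have obtained a subnet lease from ETCD; a quick way to verify this is to look at the environment files it produces (subnet.env is flannel's default output file, and /run/flannel/docker is the file written by mk-docker-opts.sh in the service file above):

cat /run/flannel/subnet.env
cat /run/flannel/docker
# expect a FLANNEL_SUBNET inside 172.16.0.0/16 with a /25 mask, plus FLANNEL_MTU and FLANNEL_IPMASQ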

4) Start Docker

systemctl daemon-reload
systemctl start docker
systemctl status -l docker

After starting, check whether docker is now managed by flannel:

Command: ps aux | grep docker

Output: /usr/bin/docker daemon -g /opt/domeos/openxxs/k8s-1.1.3-flannel/docker --bip=172.16.17.129/25 --ip-masq=true --mtu=1450 --insecure-registry 10.11.150.76:5000

You can see that docker was started with the flanneld-related options added (bip, ip-masq and mtu).
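
The same can be double-checked at the interface level; docker0 should now carry an address from the flannel-assigned /25:

ip addr show flannel.1
ip addr show docker0
# docker0 should hold the --bip address shown above (here 172.16.17.129/25)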

5) Configure and start kube-proxy

<1> Edit the configuration file /etc/sysconfig/kube-proxy

# configure file for kube-proxy

# --master
KUBE_MASTER='--master=http://10.11.151.97:8080'
# --proxy-mode
PROXY_MODE='--proxy-mode=iptables'
# --log-dir
LOG_DIR='--log-dir=/opt/domeos/openxxs/k8s-1.1.3-flannel/logs'
# other parameters
KUBE_PROXY_OPTS=''

<2> Edit the configuration file /lib/systemd/system/kube-proxy.service

[Unit]
Description=kube-proxy

[Service]
EnvironmentFile=/etc/sysconfig/kube-proxy
ExecStart=/opt/domeos/openxxs/k8s-1.1.3-flannel/kube-proxy $KUBE_MASTER \
          $PROXY_MODE \
          $LOG_DIR \
          $KUBE_PROXY_OPTS
Restart=on-failure

<3> Start kube-proxy

systemctl daemon-reload
systemctl start kube-proxy
systemctl status -l kube-proxy

6) Configure and start kubelet

<1> Edit the configuration file /etc/sysconfig/kubelet

# configure file for kubelet

# --api-servers
API_SERVERS='--api-servers=http://10.11.151.97:8080'
# --address
ADDRESS='--address=0.0.0.0'
# --hostname-override
HOSTNAME_OVERRIDE=''
# --allow-privileged
ALLOW_PRIVILEGED='--allow-privileged=false'
# --pod-infra-container-image
POD_INFRA='--pod-infra-container-image=10.11.150.76:5000/kubernetes/pause:latest'
# --cluster-dns
CLUSTER_DNS='--cluster-dns=172.16.40.1'
# --cluster-domain
CLUSTER_DOMAIN='--cluster-domain=domeos.sohu'
# --max-pods
MAX_PODS='--max-pods=70'
# --log-dir
LOG_DIR='--log-dir=/opt/domeos/openxxs/k8s-1.1.3-flannel/logs'
# other parameters
KUBELET_OPTS=''

The CLUSTER_DNS and CLUSTER_DOMAIN settings relate to the DNS service used inside the cluster; see "Building a DNS service in k8s that can resolve hostnames" for details. Every pod first starts a /kubernetes/pause:latest container to perform some basic initialization. The default location of this image is gcr.io/google_containers/pause:latest, and the POD_INFRA parameter changes where it is pulled from. Because of the GFW that location may be unreachable, so you can download the image and push it into your own local docker registry, and have kubelet pull it from there. The MAX_PODS parameter is the maximum number of pods a node may run.
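
A sketch of mirroring the pause image into the local registry used above (assuming the image can first be pulled, or loaded from a saved tarball with docker load, on a machine that can reach it):

docker pull gcr.io/google_containers/pause:latest
docker tag gcr.io/google_containers/pause:latest 10.11.150.76:5000/kubernetes/pause:latest
docker push 10.11.150.76:5000/kubernetes/pause:latest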

<2> Edit the configuration file /lib/systemd/system/kubelet.service

[Unit]
Description=kubelet

[Service]
EnvironmentFile=/etc/sysconfig/kubelet
ExecStart=/opt/domeos/openxxs/k8s-1.1.3-flannel/kubelet $API_SERVERS \
          $ADDRESS \
          $HOSTNAME_OVERRIDE \
          $ALLOW_PRIVILEGED \
          $POD_INFRA \
          $CLUSTER_DNS \
          $CLUSTER_DOMAIN \
          $MAX_PODS \
          $LOG_DIR \
          $KUBELET_OPTS
Restart=on-failure

<3> Start kubelet

systemctl daemon-reload
systemctl start kubelet
systemctl status -l kubelet

6. Testing

1) Check node status

Using kubectl, run the following command to check the status:

Command:
./kubectl --server=10.11.151.97:8080 get nodes
Output:
NAME         LABELS                              STATUS    AGE
tc-151-100   kubernetes.io/hostname=tc-151-100   Ready     9m
tc-151-101   kubernetes.io/hostname=tc-151-101   Ready     17h
Notes:
Both nodes are in Ready status, which means 100 and 101 have registered with the k8s cluster successfully.

2) Create pods

Create a test.yaml file with the following contents:

apiVersion: v1
kind: ReplicationController
metadata:
    name: test-1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test-1
    spec:
      containers:
        - name: iperf
          image: 10.11.150.76:5000/openxxs/iperf:1.2
      nodeSelector:
        kubernetes.io/hostname: tc-151-100
---
apiVersion: v1
kind: ReplicationController
metadata:
    name: test-2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test-2
    spec:
      containers:
        - name: iperf
          image: 10.11.150.76:5000/openxxs/iperf:1.2
      nodeSelector:
        kubernetes.io/hostname: tc-151-100
---
apiVersion: v1
kind: ReplicationController
metadata:
    name: test-3
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test-3
    spec:
      containers:
        - name: iperf
          image: 10.11.150.76:5000/openxxs/iperf:1.2
      nodeSelector:
        kubernetes.io/hostname: tc-151-101
---
apiVersion: v1
kind: ReplicationController
metadata:
    name: test-4
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test-4
    spec:
      containers:
        - name: iperf
          image: 10.11.150.76:5000/openxxs/iperf:1.2
      nodeSelector:
        kubernetes.io/hostname: tc-151-101

This creates the two pods test-1 and test-2 on host 100 and the two pods test-3 and test-4 on host 101. Adjust parameters such as image to your actual environment.

Create the pods with kubectl and test.yaml:

Command:
./kubectl --server=10.11.151.97:8080 create -f test.yaml 
Output:
replicationcontroller "test-1" created
replicationcontroller "test-2" created
replicationcontroller "test-3" created
replicationcontroller "test-4" created
Notes:
All four ReplicationControllers were created successfully.

Command:
./kubectl --server=10.11.151.97:8080 get pods
Output:
NAME           READY       STATUS        RESTARTS      AGE
test-1-vrt0s    1/1        Running          0          8m
test-2-uwtj7    1/1        Running          0          8m
test-3-59562    1/1        Running          0          8m
test-4-m2rqw    1/1        Running          0          8m
Notes:
All four pods started successfully and are in a normal state.

3) Communication between nodes

<1> Get the IP addresses of the containers behind the four pods

Command:
./kubectl --server=10.11.151.97:8080 describe pod test-1-vrt0s
Output:
......
IP 172.16.42.4
......
Notes:
This command returns the pod's details; the IP field is the pod's IP address inside the cluster, which is also the container's IP address.

The IPs of the four pods:

pod name       container ID   host            IP address
test-1-vrt0s   c19ff66d7cc7   10.11.151.100   172.16.42.4
test-2-uwtj7   3fa6b1f78996   10.11.151.100   172.16.42.5
test-3-59562   0cc5ffa7cce6   10.11.151.101   172.16.17.132
test-4-m3rqw   2598a2ee012e   10.11.151.101   172.16.17.133

<2> Enter each container and ping the other containers

Command:
docker ps | grep -v pause
Output:
CONTAINER ID        IMAGE                                       COMMAND             CREATED             STATUS              PORTS               NAMES
3fa6b1f78996        10.11.150.76:5000/openxxs/iperf:1.2         "/block"            About an hour ago   Up About an hour                        k8s_iperf.a4ede594_test-2-uwtj7_default_dd1d9201-c63a-11e5-8db4-782bcb435e46_aa0327af
c19ff66d7cc7        10.11.150.76:5000/openxxs/iperf:1.2         "/block"            About an hour ago   Up About an hour                        k8s_iperf.a4ede594_test-1-vrt0s_default_dd0fdef0-c63a-11e5-8db4-782bcb435e46_89db57da

Command:
docker exec -it c19ff66d7cc7 /bin/sh
Output:
sh-4.2# ping 172.16.17.132 -c 5
PING 172.16.17.132 (172.16.17.132) 56(84) bytes of data.
64 bytes from 172.16.17.132: icmp_seq=1 ttl=62 time=0.938 ms
64 bytes from 172.16.17.132: icmp_seq=2 ttl=62 time=0.329 ms
64 bytes from 172.16.17.132: icmp_seq=3 ttl=62 time=0.329 ms
64 bytes from 172.16.17.132: icmp_seq=4 ttl=62 time=0.303 ms
64 bytes from 172.16.17.132: icmp_seq=5 ttl=62 time=0.252 ms

--- 172.16.17.132 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4001ms
rtt min/avg/max/mdev = 0.252/0.430/0.938/0.255 ms
sh-4.2# ping 172.16.17.133 -c 5
PING 172.16.17.133 (172.16.17.133) 56(84) bytes of data.
64 bytes from 172.16.17.133: icmp_seq=1 ttl=62 time=0.619 ms
64 bytes from 172.16.17.133: icmp_seq=2 ttl=62 time=0.335 ms
64 bytes from 172.16.17.133: icmp_seq=3 ttl=62 time=0.320 ms
64 bytes from 172.16.17.133: icmp_seq=4 ttl=62 time=0.328 ms
64 bytes from 172.16.17.133: icmp_seq=5 ttl=62 time=0.323 ms

--- 172.16.17.133 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4000ms
rtt min/avg/max/mdev = 0.320/0.385/0.619/0.117 ms
sh-4.2# ping 172.16.42.5 -c 5  
PING 172.16.42.5 (172.16.42.5) 56(84) bytes of data.
64 bytes from 172.16.42.5: icmp_seq=1 ttl=64 time=0.122 ms
64 bytes from 172.16.42.5: icmp_seq=2 ttl=64 time=0.050 ms
64 bytes from 172.16.42.5: icmp_seq=3 ttl=64 time=0.060 ms
64 bytes from 172.16.42.5: icmp_seq=4 ttl=64 time=0.051 ms
64 bytes from 172.16.42.5: icmp_seq=5 ttl=64 time=0.070 ms

--- 172.16.42.5 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 3999ms
rtt min/avg/max/mdev = 0.050/0.070/0.122/0.028 ms

The above tests communication between test-1 and the other three pods; all connections work. Testing test-2, test-3 and test-4 against the other pods in the same way also shows normal connectivity. The deployment is working.
