Running a Docker cluster on a Calico network

## Network and version information

docker1 centos7 192.168.75.200

docker2 centos7 192.168.75.201

Physical network: 192.168.75.1/24

Docker version 1.10.3, build 3999ccb-unsupported (installation steps omitted)

# calicoctl version

Version:      v1.0.0-12-g0d6d228
Build date:   2017-01-17T09:01:03+0000
Git commit:   0d6d228

## 1. Install etcd

#### Download and install etcd

# ETCD_VER=v3.0.16

# DOWNLOAD_URL=https://github.com/coreos/etcd/releases/download

# curl -L ${DOWNLOAD_URL}/${ETCD_VER}/etcd-${ETCD_VER}-linux-amd64.tar.gz -o /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz

# mkdir -p /tmp/test-etcd && tar xzvf /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz -C /tmp/test-etcd --strip-components=1

# cd /tmp/test-etcd && cp etcd* /usr/local/bin/

Start etcd

# etcd --listen-client-urls 'http://192.168.75.200:2379' --advertise-client-urls 'http://192.168.75.200:2379'

Check the etcd member list

# etcdctl --endpoint 'http://192.168.75.200:2379' member list

8e9e05c52164694d: name=default peerURLs=http://localhost:2380 clientURLs=http://192.168.75.200:2379 isLeader=true

## 2. Download and install Calico

Tune the network kernel parameters

# sysctl -w net.netfilter.nf_conntrack_max=1000000
# echo "net.netfilter.nf_conntrack_max=1000000" >> /etc/sysctl.conf

Download calicoctl

# cd /usr/local/bin/ && wget http://www.projectcalico.org/builds/calicoctl

# chmod 755 calicoctl

Set the etcd endpoint environment variable

# export ETCD_ENDPOINTS=http://192.168.75.200:2379 && echo "export ETCD_ENDPOINTS=http://192.168.75.200:2379" >>/etc/profile

Install and run the Calico node

# calicoctl node run

Running command to load modules: modprobe -a xt_set ip6_tables
Enabling IPv4 forwarding
Enabling IPv6 forwarding
Increasing conntrack limit
Removing old calico-node container (if running).
Running the following command to start calico-node:

docker run --net=host --privileged --name=calico-node -d --restart=always -e ETCD_AUTHORITY= -e ETCD_SCHEME= -e NODENAME=docker1 -e CALICO_NETWORKING_BACKEND=bird -e NO_DEFAULT_POOLS= -e CALICO_LIBNETWORK_ENABLED=true -e CALICO_LIBNETWORK_IFPREFIX=cali -e ETCD_ENDPOINTS=http://192.168.75.200:2379 -v /run/docker/plugins:/run/docker/plugins -v /var/run/docker.sock:/var/run/docker.sock -v /var/run/calico:/var/run/calico -v /lib/modules:/lib/modules -v /var/log/calico:/var/log/calico calico/node:latest

Image may take a short time to download if it is not available locally.
Container started, checking progress logs.
Waiting for etcd connection...
Using auto-detected IPv4 address: 192.168.75.200
No IPv6 address configured
Using global AS number
Calico node name:  docker1
CALICO_LIBNETWORK_ENABLED is true - start libnetwork service
Calico node started successfully
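The same setup has to be repeated on docker2 before the BGP mesh below can form. A minimal sketch (the `--ip` flag, which pins the node address instead of relying on auto-detection, is optional; the command is guarded so it is a no-op where calicoctl is not installed):

```shell
# Run on docker2 (192.168.75.201); --ip pins the node address rather than
# leaving it to auto-detection.
export ETCD_ENDPOINTS=http://192.168.75.200:2379
if command -v calicoctl >/dev/null 2>&1; then
  calicoctl node run --ip=192.168.75.201
fi
```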

Check the Calico node status on docker1: the connection to docker2 (192.168.75.201) is already established

# calicoctl node status

Calico process is running.

IPv4 BGP status
+----------------+-------------------+-------+----------+-------------+
|  PEER ADDRESS  |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+----------------+-------------------+-------+----------+-------------+
| 192.168.75.201 | node-to-node mesh | up    | 01:57:54 | Established |
+----------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.

## 3. Configure the Calico IP pool

View the default pools

# calicoctl get pool

CIDR                       
192.168.0.0/16             
fd80:24e2:f998:72d6::/64

Delete the default pools (this can be run on any node)

# calicoctl delete pool 192.168.0.0/16

Successfully deleted 1 'ipPool' resource(s)

# calicoctl delete pool fd80:24e2:f998:72d6::/64

Successfully deleted 1 'ipPool' resource(s)

Create a new ipPool (this can be run on any node)

# vi /etc/calico/ippool_10.1.0.0_16.cfg

apiVersion: v1
kind: ipPool
metadata:
  cidr: 10.1.0.0/16
spec:
  ipip:
    enabled: true
  nat-outgoing: true
  disabled: false

# calicoctl create -f /etc/calico/ippool_10.1.0.0_16.cfg

Successfully created 1 'ipPool' resource(s)

## 4. Configure Docker and create Docker networks

Modify the startup options of every Docker daemon in the cluster, then restart Docker.

Add --cluster-store=etcd://192.168.75.200:2379/calico to point Docker at the shared key-value store; without it, the network creation in the next step fails.

# vi /etc/sysconfig/docker

OPTIONS='--selinux-enabled --log-driver=journald --cluster-store=etcd://192.168.75.200:2379/calico'
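Before restarting, it is worth confirming the flag actually made it into the file. A sketch for each node (has_cluster_store is a hypothetical helper; the restart uses systemd as on CentOS 7):

```shell
# Check that the OPTIONS line carries the cluster-store flag, and only then
# restart the Docker daemon.
has_cluster_store() {   # reads an OPTIONS line on stdin
  grep -q -- '--cluster-store=etcd://192.168.75.200:2379/calico'
}

if [ -f /etc/sysconfig/docker ] && has_cluster_store </etc/sysconfig/docker; then
  systemctl restart docker
fi
```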

Create the networks on any one Docker host in the cluster

# docker network create --driver=calico --ipam-driver=calico-ipam net1

0501f1b788756d122568e7aed2d7c56fe2de9138f9bd00f6628c4b66c81c7c9b

# docker network create --driver=calico --ipam-driver=calico-ipam net2

4b636bf63b23dee13b817c911335823a84ad6d55771a44e89fb81c16f76663ad

# docker network ls

NETWORK ID          NAME                DRIVER
54a450c39848        net1                calico              
8fdcdecdb0bc        net2                calico              
e0d1a688fef8        none                null                
0e987140865a        host                host                
b5122ac5e20e        bridge              bridge

## 5. Test network connectivity

On docker1, start one container on net1 and one on net2

[root@docker1 bin]# docker run -itd --net=net1 --name=testnet1 centos /bin/bash
579c509e293e25340f10cc188a91136f99ed9021b99f795a9056a683b6b46864
[root@docker1 bin]# docker run -itd --net=net2 --name=testnet2 centos /bin/bash
c8777a2ff6add64e6abf454828820a6cfee332086a58c769a6cf1e5e0fda8760

On docker2, start one container on net1 and one on net2

[root@docker2 bin]# docker run -itd --net=net1 --name=testnet3 centos /bin/bash
8bb7be8d86a04631a442a9f43e6be9576a891f704b91042550c5fe632fa11f06
[root@docker2 bin]# docker run -itd --net=net2 --name=testnet4 centos /bin/bash
422f4466db503b380f646d6eaee14a2f695550669fd4987fadefff438f456a36

The container IPs are as follows

testnet1 10.1.174.193
testnet2 10.1.174.194
testnet3 10.1.166.129
testnet4 10.1.166.130

#### Ping the other containers from testnet1

testnet1 can reach only testnet3 on docker2, because those two containers both belong to the net1 network

[root@579c509e293e /]# ping 10.1.166.129
PING 10.1.166.129 (10.1.166.129) 56(84) bytes of data.
64 bytes from 10.1.166.129: icmp_seq=1 ttl=62 time=0.400 ms
^C
--- 10.1.166.129 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.400/0.400/0.400/0.000 ms
[root@579c509e293e /]# ping 10.1.166.130
PING 10.1.166.130 (10.1.166.130) 56(84) bytes of data.
^C
--- 10.1.166.130 ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 3000ms

[root@579c509e293e /]# ping 10.1.174.194
PING 10.1.174.194 (10.1.174.194) 56(84) bytes of data.
^C
--- 10.1.174.194 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2000ms
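The manual pings above can be automated. A sketch assuming the container names and IPs listed earlier; only 10.1.166.129, the net1 peer, is expected to answer:

```shell
# ping_from <container> <ip>: run one ping inside the container via docker
# exec and report the result; any failure (including docker itself being
# unavailable) is reported as "unreachable".
ping_from() {
  if docker exec "$1" ping -c 1 -W 2 "$2" >/dev/null 2>&1; then
    echo ok
  else
    echo unreachable
  fi
}

for ip in 10.1.166.129 10.1.166.130 10.1.174.194; do
  echo "testnet1 -> $ip: $(ping_from testnet1 "$ip")"
done
```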

Problems encountered:

1. After a Docker daemon failure, the testnet3 and testnet4 containers could not be restarted

docker: Error response from daemon: service endpoint with name testnet3 already exists.

Solution:

The endpoint records in etcd were not cleaned up; delete them by hand. To locate them:

54a450..... is the network ID, which you can find with docker network ls

Walk all the keys under /calico/docker/network/v1.0/endpoint/54a450c3984853b3942738163cfaaa7dd247686ccc10b8f395dfb807df11e2bb/ to find the matching record, then delete it manually

# etcdctl --endpoint 'http://192.168.75.200:2379' get /calico/docker/network/v1.0/endpoint/54a450c3984853b3942738163cfaaa7dd247686ccc10b8f395dfb807df11e2bb/5d9cad95e7193e47177eb6d8bdfa25ebc878d8565c48227861f6f6700136a10c

{"anonymous":false,"disableResolution":false,"ep_iface":{"addr":"10.1.174.198/32","dstPrefix":"cali","mac":"ee:ee:ee:ee:ee:ee","routes":["169.254.1.1/32"],"srcName":"temp5d9cad95e71","v4PoolID":"CalicoPoolIPv4","v6PoolID":""},"exposed_ports":[],"generic":{"com.docker.network.endpoint.exposedports":[],"com.docker.network.portmap":[]},"id":"5d9cad95e7193e47177eb6d8bdfa25ebc878d8565c48227861f6f6700136a10c","locator":"","myAliases":null,"name":"testnet1","sandbox":"bc9abf7c29a9532500aeb9618b22254eab9e73aecc9d4b6c3bf488b6d173791e"}
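Walking those keys can also be scripted. A sketch against the etcd v2 paths above, where endpoint_name is a hypothetical helper that pulls the "name" field out of each endpoint's JSON so the stale key can be matched to its container before removing it with etcdctl rm:

```shell
# List every endpoint key under the net1 network together with the container
# it belongs to; the stale entry can then be removed with `etcdctl rm <key>`.
ENDPOINT='http://192.168.75.200:2379'
NET_ID='54a450c3984853b3942738163cfaaa7dd247686ccc10b8f395dfb807df11e2bb'

endpoint_name() {   # extract the "name" field from one endpoint's JSON value
  sed -n 's/.*"name":"\([^"]*\)".*/\1/p'
}

if command -v etcdctl >/dev/null 2>&1; then
  for key in $(etcdctl --endpoint "$ENDPOINT" ls "/calico/docker/network/v1.0/endpoint/$NET_ID"); do
    name=$(etcdctl --endpoint "$ENDPOINT" get "$key" | endpoint_name)
    echo "$key -> $name"
    # remove the stale record: etcdctl --endpoint "$ENDPOINT" rm "$key"
  done
fi
```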

2. A node cannot reach containers running on other nodes

By default the net1 and net2 profiles allow endpoints carrying the same tag to reach each other, but the Calico nodes themselves cannot reach the containers; the profile has to be modified

# calicoctl get profile net1 -o yaml > /etc/calico/profile_net1.yaml

# vi /etc/calico/profile_net1.yaml

- apiVersion: v1
  kind: profile
  metadata:
    name: net1
    tags:
    - net1
  spec:
    egress:
    - action: allow
      destination: {}
      source: {}
    ingress:
    - action: allow
      destination: {}
      source:
        tag: net1
# the rules below are newly added
    - action: allow
      destination: {}
      source:
        net: 192.168.75.0/24
    - action: allow
      destination: {}
      source:
        net: 10.1.174.192/32
    - action: allow
      destination: {}
      source:
        net: 10.1.166.128/32

# calicoctl create -f /etc/calico/profile_net1.yaml

Successfully created 1 'policy' resource(s)

10.1.174.192/32 and 10.1.166.128/32 are the tunl0 IPs of docker1 and docker2. Configuring this by hand is fairly tedious; it ought to be scripted.
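Such a script might look like the following sketch: tunl0_ip parses the output of `ip -4 addr show tunl0`, and emit_rules prints one allow rule per node in the YAML shape used by the profile above. The NODES list and passwordless root ssh are assumptions; the printed rules would be pasted into the profile's ingress section and re-applied with calicoctl.

```shell
# Print a "/32 allow" ingress rule for each node's tunl0 address.
NODES="192.168.75.200 192.168.75.201"

tunl0_ip() {        # first IPv4 on tunl0, with the prefix length stripped
  awk '/inet /{sub(/\/.*/, "", $2); print $2; exit}'
}

emit_rules() {
  for node in $NODES; do
    ip=$(ssh "root@$node" ip -4 addr show tunl0 | tunl0_ip)
    printf -- '    - action: allow\n      destination: {}\n      source:\n        net: %s/32\n' "$ip"
  done
}
# Usage on a node with ssh access to the others:
#   emit_rules          # paste into the profile YAML, then re-apply it
```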

After that, pinging any net1 container on another node, from any node in the cluster, succeeds.
