To get containers on multiple hosts communicating, the first requirement is that each container's IP be unique across all of the virtual machines. Taking the figure above as an example, a container newly created on 192.168.205.11 must not be assigned the IP 172.17.0.3, because that IP already exists on 192.168.205.10.
To keep container IPs unique across multiple hosts, we can use etcd, an open-source, distributed key-value store:
https://coreos.com/etcd/
Install etcd on docker-node1:
wget https://github.com/coreos/etcd/releases/download/v3.0.12/etcd-v3.0.12-linux-amd64.tar.gz
tar zxvf etcd-v3.0.12-linux-amd64.tar.gz
cd etcd-v3.0.12-linux-amd64
nohup ./etcd --name docker-node1 \
  --initial-advertise-peer-urls http://192.168.205.10:2380 \
  --listen-peer-urls http://192.168.205.10:2380 \
  --listen-client-urls http://192.168.205.10:2379,http://127.0.0.1:2379 \
  --advertise-client-urls http://192.168.205.10:2379 \
  --initial-cluster-token etcd-cluster \
  --initial-cluster docker-node1=http://192.168.205.10:2380,docker-node2=http://192.168.205.11:2380 \
  --initial-cluster-state new &
Install etcd on docker-node2:
wget https://github.com/coreos/etcd/releases/download/v3.0.12/etcd-v3.0.12-linux-amd64.tar.gz
tar zxvf etcd-v3.0.12-linux-amd64.tar.gz
cd etcd-v3.0.12-linux-amd64/
nohup ./etcd --name docker-node2 \
  --initial-advertise-peer-urls http://192.168.205.11:2380 \
  --listen-peer-urls http://192.168.205.11:2380 \
  --listen-client-urls http://192.168.205.11:2379,http://127.0.0.1:2379 \
  --advertise-client-urls http://192.168.205.11:2379 \
  --initial-cluster-token etcd-cluster \
  --initial-cluster docker-node1=http://192.168.205.10:2380,docker-node2=http://192.168.205.11:2380 \
  --initial-cluster-state new &
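Because etcd was launched in the background with nohup, its startup logs are appended to nohup.out in the current directory; if a node later refuses to join the cluster, that log is the first place to look:

tail nohup.out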
Check the cluster's health on either docker-node1 or docker-node2:
./etcdctl cluster-health
member 21eca106efe4caee is healthy: got healthy result from http://192.168.205.10:2379
member 8614974c83d1cc6d is healthy: got healthy result from http://192.168.205.11:2379
cluster is healthy
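To see the "distributed key-value" part in action, write a key on one node and read it back on the other. A minimal sketch, where /demo-key is just a throwaway example (the etcdctl bundled with v3.0.12 speaks the v2 API by default):

# on docker-node1: store a value
./etcdctl set /demo-key hello
# on docker-node2: the same key is visible, so the store is shared
./etcdctl get /demo-key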
Use the systemctl stop docker command to shut down the Docker service on each VM, then restart the daemon manually as follows.
On docker-node1:
sudo /usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock \
  --cluster-store=etcd://192.168.205.10:2379 --cluster-advertise=192.168.205.10:2375 &
On docker-node2:
sudo /usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock \
  --cluster-store=etcd://192.168.205.11:2379 --cluster-advertise=192.168.205.11:2375 &
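Before going further, it's worth confirming that both daemons actually picked up the cluster settings. One way is to grep docker info; on docker-node1 the output should look roughly like the commented lines below (label wording can differ between Docker versions):

docker info | grep -i cluster
# Cluster Store: etcd://192.168.205.10:2379
# Cluster Advertise: 192.168.205.10:2375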
Create an overlay network named demo on docker-node1:
docker network create -d overlay demo
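Docker's IPAM picks the overlay's subnet automatically (10.0.0.0/24 here, as the inspect output further below shows). If you need a specific range, --subnet can be passed explicitly; demo2 below is just an illustrative name, not part of this walkthrough:

docker network create -d overlay --subnet 10.0.1.0/24 demo2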
On docker-node1 you can see that demo's DRIVER is overlay and its SCOPE is global:
[vagrant@docker-node1 etcd-v3.0.12-linux-amd64]$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
9bacf5670fa1        bridge              bridge              local
5a8cd36174a1        demo                overlay             global
efb6975c8935        host                host                local
4213425c5293        my-bridge           bridge              local
3fa0f2e1a00b        none                null                local
Then on docker-node2 you'll find that the same network has been created in sync:
[vagrant@docker-node2 etcd-v3.0.12-linux-amd64]$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
1c79331058ff        bridge              bridge              local
5a8cd36174a1        demo                overlay             global
148fe990ddc5        host                host                local
452bd665cf13        none                null                local
Looking at the key-value pairs stored in etcd, you'll find that the content of docker network inspect demo matches what etcd holds; this shared state is what makes it a so-called distributed network:
[ { "Name": "demo", "Id": "5a8cd36174a1b37f9146aaa99de33ff20bfa8df472e07412e36048a39fe9a0e9", "Created": "2018-07-03T09:21:08.95776633Z", "Scope": "global", "Driver": "overlay", "EnableIPv6": false, "IPAM": { "Driver": "default", "Options": {}, "Config": [ { "Subnet": "10.0.0.0/24", "Gateway": "10.0.0.1" } ] }, "Internal": false, "Attachable": false, "Ingress": false, "ConfigFrom": { "Network": "" }, "ConfigOnly": false, "Containers": {}, "Options": {}, "Labels": {} } ]
On docker-node1:
docker run -d --name test1 --network demo busybox /bin/sh -c "while true;do sleep 3600;done"
If you repeat the same command on docker-node2, it fails with the error below; from another angle, this shows etcd at work:
docker: Error response from daemon: Conflict. The container name "/test1" is already in use by container "14d690ada81a4510eb5dd24286d061064e8131f549dcb5320f5b5b441cc49c40". You have to remove (or rename) that container to be able to reuse that name.
Change the name to test2 instead:
docker run -d --name test2 --network demo busybox /bin/sh -c "while true;do sleep 3600;done"
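You can also confirm the assigned address from inside the container; busybox ships an ip applet, and test2 should show eth0 on the 10.0.0.0/24 overlay subnet plus a second interface used for outbound traffic:

docker exec test2 ip a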
Now inspect the demo network to see test1 and test2, running on docker-node1 and docker-node2 respectively:
docker network inspect demo
{
    "Containers": {
        "a9fa503852b26267d13b67067ba6c119eebcf0677f8365483b3b798dcab616a5": {
            "Name": "test1",
            "EndpointID": "3869243fcfb6a975fadf64dc31533161d2922137b18af0c3ee78c02f82b84def",
            "MacAddress": "",
            "IPv4Address": "10.0.0.2/24",
            "IPv6Address": ""
        },
        "ep-8f8e30e36867c5b906bc84438049b2911660057fe031536a97efcf01d7d98699": {
            "Name": "test2",
            "EndpointID": "8f8e30e36867c5b906bc84438049b2911660057fe031536a97efcf01d7d98699",
            "MacAddress": "",
            "IPv4Address": "10.0.0.3/24",
            "IPv6Address": ""
        }
    }
}
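As a small convenience, docker network inspect accepts a Go-template format string if you only want the container list out of that output:

docker network inspect demo -f '{{json .Containers}}'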
From test1, pinging the test2 container on docker-node2 succeeds:
docker exec test1 ping 10.0.0.3
Likewise, from test2, pinging the test1 container on docker-node1 succeeds:
docker exec test2 ping 10.0.0.2
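Beyond raw IPs, user-defined networks (overlays included) get Docker's embedded DNS, so the two containers should also be able to resolve each other by name; a quick check, assuming the setup above:

docker exec test1 ping -c 2 test2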
With that, multi-host container communication is working.