manager node: CentOS Linux release 7.4.1708 (Core)
worker node: CentOS Linux release 7.5.1804 (Core)
manager node: Docker version 18.09.4, build d14af54266
worker node: Docker version 19.03.1, build 74b1e89
manager node: 192.168.246.194
worker node: 192.168.246.195
manager node:
# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
e01d59fe00e5        bridge              bridge              local
15426f623c37        host                host                local
dd5d570ac60e        none                null                local
worker node:
# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
70ed15a24acd        bridge              bridge              local
e2da5d928935        host                host                local
a7dbda3b96e8        none                null                local
Run on the manager node: docker swarm init
Run on the worker node: docker swarm join --token SWMTKN-1-0p3g6ijmphmw5xrikh9e3asg5n3yzan0eomnsx1xuvkovvgfsp-enrmg2lj1dejg5igmnpoaywr1 192.168.246.194:2377
Note ⚠️:
If you forget the docker swarm join command, you can look it up with the following commands:
(1) For worker nodes: docker swarm join-token worker
(2) For manager nodes: docker swarm join-token manager
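For example, running the first command on the manager simply re-prints the join command for this cluster (output abridged; the token is the one shown above):

# docker swarm join-token worker
To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-0p3g6ijmphmw5xrikh9e3asg5n3yzan0eomnsx1xuvkovvgfsp-enrmg2lj1dejg5igmnpoaywr1 192.168.246.194:2377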
manager node:
# docker node ls
ID                            HOSTNAME      STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
hplz9lawfpjx6fpz0j1bevocp     MyTest03      Ready     Active                          19.03.1
q5af6b67bmho8z0d7**m2yy5j *   mysql-nginx   Ready     Active         Leader           18.09.4
manager node:
# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
e01d59fe00e5        bridge              bridge              local
7c90d1bf0f62        docker_gwbridge     bridge              local
15426f623c37        host                host                local
8lyfiluksqu0        ingress             overlay             swarm
dd5d570ac60e        none                null                local
worker node:
# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
70ed15a24acd        bridge              bridge              local
985367037d3b        docker_gwbridge     bridge              local
e2da5d928935        host                host                local
8lyfiluksqu0        ingress             overlay             swarm
a7dbda3b96e8        none                null                local
Note ⚠️:
When a Docker Swarm cluster is first created, Docker creates two networks on each host in addition to docker0: a bridge-type network (the docker_gwbridge bridge) and an overlay-type network (ingress), plus a transitional namespace, ingress_sbox. We can create our own overlay network on the manager node with the following command:
docker network create -d overlay uber-svc
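By default Docker picks the overlay's subnet automatically (10.0.0.0/24 here, as the inspect output further below shows). If you need a fixed range, a hedged variant of the same command pins it explicitly (the values here are illustrative, not what this cluster uses):

# Same create command, but with an explicit subnet and gateway
docker network create -d overlay --subnet 10.0.9.0/24 --gateway 10.0.9.1 uber-svc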
Check the docker swarm networks on the manager and worker hosts again:
manager node:
# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
e01d59fe00e5        bridge              bridge              local
7c90d1bf0f62        docker_gwbridge     bridge              local
15426f623c37        host                host                local
8lyfiluksqu0        ingress             overlay             swarm
dd5d570ac60e        none                null                local
kzxwwwtunpqe        uber-svc            overlay             swarm    ===> this is the uber-svc network we just created
worker node:
# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
70ed15a24acd        bridge              bridge              local
985367037d3b        docker_gwbridge     bridge              local
e2da5d928935        host                host                local
8lyfiluksqu0        ingress             overlay             swarm
a7dbda3b96e8        none                null                local
Note ⚠️:
Notice that there is no uber-svc network on the worker node. This is because an overlay network only becomes available on a node once a running container attaches to it. This lazy-propagation strategy improves scalability by reducing the amount of network state that has to be synchronized.
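Once a task of a service attached to uber-svc is scheduled onto the worker (as happens in the deployment later in this article), the network shows up there too. A quick way to check on the worker node:

# Run on the worker node; empty output means the overlay has not propagated yet
docker network ls --filter name=uber-svc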
manager node:
# ip netns
1-8lyfiluksq (id: 0)
ingress_sbox (id: 1)
worker node:
# ip netns
1-8lyfiluksq (id: 0)
ingress_sbox (id: 1)
Note ⚠️:
(1) Because the network namespace files for containers and overlay networks are no longer under the OS default /var/run/netns, they can only be viewed by manually creating a symlink: ln -s /var/run/docker/netns /var/run/netns.
(2) Sometimes a namespace name carries a prefix such as 1- or 2-, and sometimes it does not; this does not affect network communication or operations.
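Putting (1) and (2) together, a minimal sequence to make the Docker-managed namespaces visible to ip netns is:

# Link Docker's netns directory into the location iproute2 expects
ln -s /var/run/docker/netns /var/run/netns
# Namespaces such as 1-8lyfiluksq and ingress_sbox should now be listed
ip netns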
(1) The IPAM (IP Address Management) allocation of the ingress network is as follows:
The manager node and worker node are identical:
# docker network inspect ingress
[
    {
        "Name": "ingress",
        "Id": "8lyfiluksqu09jfdjndhj68hl",
        "Created": "2019-09-09T17:59:06.326723762+08:00",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.255.0.0/16",    ===> ingress subnet
                    "Gateway": "10.255.0.1"       ===> ingress gateway
                }
(2) The self-created uber-svc overlay network uses IPAM automatically allocated by Docker:
# docker network inspect uber-svc
[
    {
        "Name": "uber-svc",
        "Id": "kzxwwwtunpqeucnrhmirg6rhm",
        "Created": "2019-09-09T10:14:06.606521342Z",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.0.0/24",    ===> uber-svc subnet
                    "Gateway": "10.0.0.1"       ===> uber-svc gateway
                }
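If you only want the subnet and gateway rather than the full JSON, docker network inspect accepts a Go-template -f filter; a small sketch:

# Prints e.g. "10.255.0.0/16 10.255.0.1" for ingress and "10.0.0.0/24 10.0.0.1" for uber-svc
docker network inspect -f '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}' ingress
docker network inspect -f '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}' uber-svc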
(1) Ingress Load Balancing
(2) Internal Load Balancing
Note ⚠️: This section focuses on the second case, Internal Load Balancing.
Before starting the hands-on work below, let's first write the following two scripts. I will give concrete examples of how to use them.
docker_netns.sh:
#!/bin/bash
NAMESPACE=$1

# No argument: list all Docker-managed network namespaces
if [[ -z $NAMESPACE ]]; then
    ls -1 /var/run/docker/netns/
    exit 0
fi

# Try a namespace file first, then fall back to resolving a container id/name
NAMESPACE_FILE=/var/run/docker/netns/${NAMESPACE}
if [[ ! -f $NAMESPACE_FILE ]]; then
    NAMESPACE_FILE=$(docker inspect -f "{{.NetworkSettings.SandboxKey}}" $NAMESPACE 2>/dev/null)
fi
if [[ ! -f $NAMESPACE_FILE ]]; then
    echo "Cannot open network namespace '$NAMESPACE': No such file or directory"
    exit 1
fi

shift
if [[ $# -lt 1 ]]; then
    echo "No command specified"
    exit 1
fi

# Execute the remaining arguments as a command inside that network namespace
nsenter --net=${NAMESPACE_FILE} $@
Note ⚠️:
(1) Given a container id, name, or namespace, this script quickly enters the container's network namespace and runs the given shell command there.
(2) With no arguments, it lists all Docker-related network namespaces.
Running the script gives results like these:
# sh docker_netns.sh                              ==> list all network namespaces
1-ycqv46f5tl
8402c558c13c
ingress_sbox

# sh docker_netns.sh deploy_nginx_nginx_1 ip r    ==> show IP info inside the container named deploy_nginx_nginx_1
default via 172.18.0.1 dev eth0
172.18.0.0/16 dev eth0 proto kernel scope link src 172.18.0.2

# sh docker_netns.sh 8402c558c13c ip r            ==> show IP info inside the container whose namespace is 8402c558c13c
default via 172.18.0.1 dev eth0
172.18.0.0/16 dev eth0 proto kernel scope link src 172.18.0.2
find_links.sh:
#!/bin/bash
DOCKER_NETNS_SCRIPT=./docker_netns.sh
IFINDEX=$1

if [[ -z $IFINDEX ]]; then
    # No ifindex given: dump the link devices of every namespace
    for namespace in $($DOCKER_NETNS_SCRIPT); do
        printf "\e[1;31m%s:\e[0m" $namespace
        $DOCKER_NETNS_SCRIPT $namespace ip -c -o link
        printf "\n"
    done
else
    # Otherwise print only the namespace(s) containing a link with that ifindex
    for namespace in $($DOCKER_NETNS_SCRIPT); do
        if $DOCKER_NETNS_SCRIPT $namespace ip -c -o link | grep -Pq "^$IFINDEX: "; then
            printf "\e[1;31m%s:\e[0m" $namespace
            $DOCKER_NETNS_SCRIPT $namespace ip -c -o link | grep -P "^$IFINDEX: "
            printf "\n"
        fi
    done
fi
Given an ifindex, this script finds the namespace the corresponding virtual network device belongs to. Example runs in the two cases:
# sh find_links.sh    ==> with no ifindex, list the link devices of all namespaces
1-3gt8phomoc:1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1\    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ip_vti0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN mode DEFAULT group default qlen 1\    link/ipip 0.0.0.0 brd 0.0.0.0
3: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT group default \    link/ether e6:c5:04:ad:7b:31 brd ff:ff:ff:ff:ff:ff
74: vxlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master br0 state UNKNOWN mode DEFAULT group default \    link/ether e6:c5:04:ad:7b:31 brd ff:ff:ff:ff:ff:ff link-netnsid 0
76: veth0@if75: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master br0 state UP mode DEFAULT group default \    link/ether e6:fa:db:53:40:fd brd ff:ff:ff:ff:ff:ff link-netnsid 1
ingress_sbox:1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1\    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ip_vti0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN mode DEFAULT group default qlen 1\    link/ipip 0.0.0.0 brd 0.0.0.0
75: eth0@if76: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT group default \    link/ether 02:42:0a:ff:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
78: eth1@if79: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default \    link/ether 02:42:ac:14:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 1
# sh find_links.sh 76    ==> look up ifindex=76
1-3gt8phomoc:76: veth0@if75: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master br0 state UP mode DEFAULT group default \    link/ether e6:fa:db:53:40:fd brd ff:ff:ff:ff:ff:ff link-netnsid 1
docker service create --name uber-svc --network uber-svc -p 80:80 --replicas 2 nigelpoulton/tu-demo:v1
The two deployed containers run on the manager and worker nodes respectively:
# docker service ls
ID             NAME       MODE         REPLICAS   IMAGE                     PORTS
pfnme5ytk59w   uber-svc   replicated   2/2        nigelpoulton/tu-demo:v1   *:80->80/tcp
# docker service ps uber-svc
ID             NAME         IMAGE                     NODE          DESIRED STATE   CURRENT STATE            ERROR   PORTS
kh8zs9a2umwf   uber-svc.1   nigelpoulton/tu-demo:v1   mysql-nginx   Running         Running 57 seconds ago
31p0rgg1f59w   uber-svc.2   nigelpoulton/tu-demo:v1   MyTest03      Running         Running 49 seconds ago
Note ⚠️:
The -p flag (you can also use --publish instead of -p) is used here to expose the service inside the container on the host, so that we can access the service.
Normally, after we deploy a service in swarm, the container has just one NIC, attached to the docker0 network. Once we publish the service, swarm does the following (see the sketch after this list):
(1) Adds three NICs to the container, eth0, eth1 and eth2: eth0 attaches to the overlay network named ingress, used for communication between hosts; eth1 attaches to the overlay network we created ourselves (uber-svc), which also carries container-to-container traffic but, unlike ingress, provides DNS resolution, i.e. service discovery; eth2 attaches to the bridge network named docker_gwbridge, which lets the container reach external networks. (This mapping matches the ip addr output shown below.)
(2) Each swarm node uses the ingress overlay network's load balancing to publish the service outside the cluster.
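As an aside, the long form of --publish makes the choice in (2) explicit: mode=ingress (the default, which is what -p 80:80 gives you) publishes through the routing mesh, while mode=host binds the port directly on each node and bypasses the mesh. A hedged sketch; the second service name is hypothetical:

# Equivalent to -p 80:80 (routing mesh / ingress mode)
docker service create --name uber-svc --network uber-svc \
  --publish published=80,target=80,mode=ingress --replicas 2 nigelpoulton/tu-demo:v1

# Host mode: no routing mesh; each task binds port 80 on the node it runs on
docker service create --name uber-svc-host --network uber-svc \
  --publish published=80,target=80,mode=host --replicas 2 nigelpoulton/tu-demo:v1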
(1) First, look at the uber-svc.1 container
# docker ps
CONTAINER ID   IMAGE                     COMMAND           CREATED              STATUS              PORTS     NAMES
a2a763734e42   nigelpoulton/tu-demo:v1   "python app.py"   About a minute ago   Up About a minute   80/tcp    uber-svc.1.kh8zs9a2umwf9cix381zr9x38
(2) Check the NICs inside the uber-svc.1 container
# sh docker_netns.sh uber-svc.1.kh8zs9a2umwf9cix381zr9x38 ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: ip_vti0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1
    link/ipip 0.0.0.0 brd 0.0.0.0
54: eth0@if55: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
    link/ether 02:42:0a:ff:00:05 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.255.0.5/16 brd 10.255.255.255 scope global eth0
       valid_lft forever preferred_lft forever
56: eth2@if57: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:13:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet 172.19.0.3/16 brd 172.19.255.255 scope global eth2
       valid_lft forever preferred_lft forever
58: eth1@if59: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
    link/ether 02:42:0a:00:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet 10.0.0.3/24 brd 10.0.0.255 scope global eth1
       valid_lft forever preferred_lft forever
Of course, you can also check this directly with the following command:
docker exec uber-svc.1.kh8zs9a2umwf9cix381zr9x38 ip addr
(3) Check the NICs in the uber-svc network namespace
# ip netns    ==> view the manager's network namespaces
d2feb68e3183 (id: 3)
1-kzxwwwtunp (id: 2)
lb_kzxwwwtun
1-8lyfiluksq (id: 0)
ingress_sbox (id: 1)

# docker network ls    ==> view the manager's cluster networks
NETWORK ID          NAME                DRIVER              SCOPE
e01d59fe00e5        bridge              bridge              local
7c90d1bf0f62        docker_gwbridge     bridge              local
15426f623c37        host                host                local
8lyfiluksqu0        ingress             overlay             swarm
dd5d570ac60e        none                null                local
kzxwwwtunpqe        uber-svc            overlay             swarm

# sh docker_netns.sh 1-kzxwwwtunp ip addr    ==> view the NICs in the uber-svc network namespace
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: ip_vti0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1
    link/ipip 0.0.0.0 brd 0.0.0.0
3: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
    link/ether 3e:cb:12:d3:a3:cb brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.1/24 brd 10.0.0.255 scope global br0
       valid_lft forever preferred_lft forever
51: vxlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master br0 state UNKNOWN group default
    link/ether e2:8e:35:4c:a3:7b brd ff:ff:ff:ff:ff:ff link-netnsid 0
53: veth0@if52: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master br0 state UP group default
    link/ether 3e:cb:12:d3:a3:cb brd ff:ff:ff:ff:ff:ff link-netnsid 1
59: veth1@if58: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master br0 state UP group default
    link/ether 9e:b4:8c:72:4e:74 brd ff:ff:ff:ff:ff:ff link-netnsid 2
Of course, you can also use the following command:
ip netns exec 1-kzxwwwtunp ip addr
# ip netns exec 1-kzxwwwtunp brctl show    ==> view the interfaces attached in the uber-svc network namespace
bridge name     bridge id           STP enabled     interfaces
br0             8000.3ecb12d3a3cb   no              veth0
                                                    veth1
                                                    vxlan0
Note ⚠️:
<1> docker exec uber-svc.1.kh8zs9a2umwf9cix381zr9x38 ip addr
This command shows that the container on the manager node has four NICs: lo, eth0, eth1 and eth2.
Of these, eth1's veth pair peer is veth1 in the uber-svc network, and eth2's veth pair peer is vethef74971 on the host.
<2> ip netns exec 1-kzxwwwtunp brctl show
Looking at the bridge attachments in the uber-svc namespace shows that veth1 is attached to the br0 bridge.
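You can trace any veth pair yourself from the @ifN suffix: eth0@if55 inside the container means its peer has ifindex 55, and find_links.sh locates the namespace that owns it. For example:

# Looks up ifindex 55; per the namespace listings in this article it resolves to
# veth1 inside the ingress namespace (1-8lyfiluksq)
sh find_links.sh 55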
(4) Check the vxlan-id of the uber-svc network
# ip netns exec 1-kzxwwwtunp ip -o -c -d link show vxlan0
***** vxlan id 4097 *****
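With the VNI known, the forwarding database of vxlan0 shows which remote VTEP each peer MAC is reached through, which is handy when debugging cross-host traffic. A small sketch, assuming the iproute2 bridge utility is installed:

# Entries map peer MACs to the remote host's VTEP IP
# (in this cluster we would expect the worker's IP, 192.168.246.195)
ip netns exec 1-kzxwwwtunp bridge fdb show dev vxlan0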
The main steps are as follows:
(1) Get the ingress network information
# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
8lyfiluksqu0        ingress             overlay             swarm
(2) Get the ingress namespace information
# ip netns
1-8lyfiluksq (id: 0)
(3) Get the IP information in the ingress namespace
# sh docker_netns.sh 1-8lyfiluksq ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: ip_vti0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1
    link/ipip 0.0.0.0 brd 0.0.0.0
3: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
    link/ether 6e:5c:bd:c0:95:ea brd ff:ff:ff:ff:ff:ff
    inet 10.255.0.1/16 brd 10.255.255.255 scope global br0
       valid_lft forever preferred_lft forever
45: vxlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master br0 state UNKNOWN group default
    link/ether e6:f3:7a:00:85:e1 brd ff:ff:ff:ff:ff:ff link-netnsid 0
47: veth0@if46: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master br0 state UP group default
    link/ether fa:98:37:aa:83:2a brd ff:ff:ff:ff:ff:ff link-netnsid 1
55: veth1@if54: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master br0 state UP group default
    link/ether 6e:5c:bd:c0:95:ea brd ff:ff:ff:ff:ff:ff link-netnsid 2
(4) Get the ID of vxlan0 in the ingress namespace
# sh docker_netns.sh 1-8lyfiluksq ip -d link show vxlan0
***** vxlan id 4096 *****
(5) Get the corresponding veth pair information in the ingress namespace
# sh find_links.sh 46
ingress_sbox:46: eth0@if47: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT group default \    link/ether 02:42:0a:ff:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
The main steps are as follows:
(1) Get the IP information of ingress_sbox
# sh docker_netns.sh ingress_sbox ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: ip_vti0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1
    link/ipip 0.0.0.0 brd 0.0.0.0
46: eth0@if47: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
    link/ether 02:42:0a:ff:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.255.0.2/16 brd 10.255.255.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 10.255.0.4/32 brd 10.255.0.4 scope global eth0
       valid_lft forever preferred_lft forever
49: eth1@if50: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:13:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet 172.19.0.2/16 brd 172.19.255.255 scope global eth1
       valid_lft forever preferred_lft forever
(2) Get the veth pair interface information of ingress_sbox
# sh find_links.sh 47
1-8lyfiluksq:47: veth0@if46: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master br0 state UP mode DEFAULT group default \    link/ether fa:98:37:aa:83:2a brd ff:ff:ff:ff:ff:ff link-netnsid 1
(3) Get the veth pair interface information on the manager host
# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens37: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:25:8b:ac brd ff:ff:ff:ff:ff:ff
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
    link/ether 02:42:cf:31:ee:03 brd ff:ff:ff:ff:ff:ff
14: ip_vti0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN mode DEFAULT group default qlen 1
    link/ipip 0.0.0.0 brd 0.0.0.0
48: docker_gwbridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
    link/ether 02:42:9c:aa:15:e6 brd ff:ff:ff:ff:ff:ff
50: vetheaa661b@if49: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP mode DEFAULT group default
    link/ether 8a:3e:01:ab:db:75 brd ff:ff:ff:ff:ff:ff link-netnsid 1
57: vethef74971@if56: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP mode DEFAULT group default
    link/ether 82:5c:65:e1:9c:e8 brd ff:ff:ff:ff:ff:ff link-netnsid 3
Note: the situation on the swarm worker node follows basically the same approach as on the manager.
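The actual balancing inside ingress_sbox is done with iptables marks plus IPVS; both can be inspected with the docker_netns.sh helper from earlier. A hedged sketch, assuming ipvsadm is installed on the host:

# Mangle-table rules mark traffic arriving on the published port (80 here)
sh docker_netns.sh ingress_sbox iptables -nL -t mangle

# IPVS virtual server for that mark, balancing across the task IPs
sh docker_netns.sh ingress_sbox ipvsadm -ln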
Note ⚠️:
(1) You can see that ingress_sbox and the namespace of the created container attach to the same ingress network.
(2) Using docker exec [container ID/name] ip r gives a more intuitive view of how traffic flows, as follows:
# docker exec uber-svc.1.kh8zs9a2umwf9cix381zr9x38 ip r
default via 172.19.0.1 dev eth2
10.0.0.0/24 dev eth1 proto kernel scope link src 10.0.0.3
10.255.0.0/16 dev eth0 proto kernel scope link src 10.255.0.5
172.19.0.0/16 dev eth2 proto kernel scope link src 172.19.0.3
This shows that the container's default gateway is 172.19.0.1; in other words, its outbound traffic leaves through eth2 (the docker_gwbridge leg).
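Internal load balancing works one layer in from this: on the uber-svc network the service name resolves, via Docker's embedded DNS, to a service VIP rather than a task IP, and swarm spreads connections to that VIP across the tasks. Two quick hedged checks (the first assumes nslookup exists inside the image):

# Resolve the service name from inside a task; expect the uber-svc VIP, not a task IP
docker exec uber-svc.1.kh8zs9a2umwf9cix381zr9x38 nslookup uber-svc

# Ask swarm for the service's virtual IPs directly
docker service inspect -f '{{json .Endpoint.VirtualIPs}}' uber-svc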
There is still a lot left to explore in Docker Swarm's underlying networking. This section is a basic summary of what I have recently learned about Docker networking; if there are errors or omissions, please point them out. Thanks!
Also: if any of the referenced documents infringe copyright, please contact me and I will remove them immediately.
Finally, thanks to open source, and embrace open source!