A host with the Docker Engine installed gains a virtual network device named docker0, with the IP address 172.17.0.1. It can be thought of as a virtual switch (a bridge). When a container is created (the default network mode is bridge), a virtual network link is created at the same time: one end is attached inside the container, the other end is attached to the docker0 virtual switch. The container's virtual NIC is assigned an IP from the 172.17.0.0/16 subnet by default.
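As a quick sanity check of the addressing above, Python's ipaddress module (an illustrative aside, not part of the original walkthrough) confirms that the docker0 address and the container addresses live in the same 172.17.0.0/16 subnet:

```python
import ipaddress

# Default docker0 bridge subnet and addresses as described above
subnet = ipaddress.ip_network("172.17.0.0/16")
gateway = ipaddress.ip_address("172.17.0.1")    # docker0's own address
container = ipaddress.ip_address("172.17.0.2")  # a typical first-container address

print(gateway in subnet)     # True: docker0 sits inside the container subnet
print(container in subnet)   # True
print(subnet.num_addresses)  # 65536 addresses available in a /16
```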
root@node01:~# docker container ls
CONTAINER ID   IMAGE                COMMAND                  CREATED          STATUS          PORTS   NAMES
f705f6f4779a   busybox:latest       "sh"                     7 minutes ago    Up 7 minutes            bbox01
83436ed405c7   busybox-httpd:v0.2   "/bin/httpd -f -h /d…"   45 minutes ago   Up 45 minutes           httpd-01
# install the bridge management tools
root@node01:~# apt-get install bridge-utils
root@node01:~# brctl show    # show bridges
bridge name     bridge id               STP enabled     interfaces
docker0         8000.02425749873b       no              veth9cb81f9
                                                        veth9f1b4f7
root@node01:~# ip link show
...
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
    link/ether 02:42:57:49:87:3b brd ff:ff:ff:ff:ff:ff
13: veth9f1b4f7@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default
    link/ether 26:8d:9e:92:aa:a6 brd ff:ff:ff:ff:ff:ff link-netnsid 0
21: veth9cb81f9@if20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default
    link/ether 1a:94:6b:46:8a:8c brd ff:ff:ff:ff:ff:ff link-netnsid 1
If a container wants to reach resources outside the host, its address is masqueraded; by default this is implemented with iptables.
root@node01:~# iptables -t nat -vnL
Chain PREROUTING (policy ACCEPT 21 packets, 2248 bytes)
 pkts bytes target     prot opt in     out     source               destination
    4   256 DOCKER     all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT 18 packets, 2046 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 1545 packets, 116K bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DOCKER     all  --  *      *       0.0.0.0/0           !127.0.0.0/8          ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT 1545 packets, 116K bytes)
 pkts bytes target     prot opt in     out      source               destination
    3   202 MASQUERADE all  --  *      !docker0 172.17.0.0/16        0.0.0.0/0

Chain DOCKER (2 references)
 pkts bytes target     prot opt in      out     source               destination
    0     0 RETURN     all  --  docker0 *       0.0.0.0/0            0.0.0.0/0
Of these rules,

Chain POSTROUTING (policy ACCEPT 1545 packets, 116K bytes)
 pkts bytes target     prot opt in     out      source               destination
    3   202 MASQUERADE all  --  *      !docker0 172.17.0.0/16        0.0.0.0/0

means that traffic from any source address in the 172.17.0.0/16 network that leaves through any interface other than docker0, that is, traffic bound for resources outside the host, will be MASQUERADEd.
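The logic of that POSTROUTING rule can be modeled as a small predicate (an illustrative sketch only, not how netfilter is actually implemented): a packet is masqueraded when its source lies in 172.17.0.0/16 and its output interface is anything other than docker0.

```python
import ipaddress

DOCKER_SUBNET = ipaddress.ip_network("172.17.0.0/16")

def is_masqueraded(src_ip: str, out_iface: str) -> bool:
    """Mirror of: MASQUERADE all -- * !docker0 172.17.0.0/16 0.0.0.0/0"""
    return ipaddress.ip_address(src_ip) in DOCKER_SUBNET and out_iface != "docker0"

# A container reaching the outside world via ens33 is masqueraded...
print(is_masqueraded("172.17.0.2", "ens33"))    # True
# ...but container-to-container traffic staying on docker0 is not,
# and neither is traffic originating from the host's own address.
print(is_masqueraded("172.17.0.2", "docker0"))  # False
print(is_masqueraded("192.168.101.40", "ens33"))  # False
```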
Type 1: Closed container. The container has only the loopback address and cannot make any network requests.
Type 2: Bridged container. The default network mode when a container is created.
Type 3: Joined container. Several containers share the UTS, IPC, and NET namespaces, so they have the same hostname and the same network devices.
Type 4: Open container. The container shares the host's network namespace.
To avoid disturbing the environment on node01, bring up a second host, node02. First create two network namespaces:
root@node02:~# ip netns add ns01
root@node02:~# ip netns add ns02
root@node02:~# ip netns list
ns02
ns01
Create a pair of virtual network devices (a veth pair):
root@node02:~# ip link add name veth1.1 type veth peer name veth1.2
root@node02:~# ip link show type veth
3: veth1.2@veth1.1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 4a:1c:b7:38:0f:5e brd ff:ff:ff:ff:ff:ff
4: veth1.1@veth1.2: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 36:72:d3:88:4c:5d brd ff:ff:ff:ff:ff:ff
Assign one of the virtual NICs to the ns01 namespace:
root@node02:~# ip link set dev veth1.2 netns ns01
root@node02:~# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:aa:9b:4f brd ff:ff:ff:ff:ff:ff
4: veth1.1@if3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 36:72:d3:88:4c:5d brd ff:ff:ff:ff:ff:ff link-netnsid 0
# view the network devices in the ns01 namespace
root@node02:~# ip netns exec ns01 ifconfig -a
lo: flags=8<LOOPBACK>  mtu 65536
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

veth1.2: flags=4098<BROADCAST,MULTICAST>  mtu 1500
        ether 4a:1c:b7:38:0f:5e  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

# the device can also be renamed
root@node02:~# ip netns exec ns01 ip link set dev veth1.2 name eth0
root@node02:~# ip netns exec ns01 ifconfig -a
eth0: flags=4098<BROADCAST,MULTICAST>  mtu 1500
        ether 4a:1c:b7:38:0f:5e  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=8<LOOPBACK>  mtu 65536
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
Now the host has only the veth1.1 virtual NIC; veth1.2 has been moved into the ns01 namespace.
Assign an IP address to each of the two virtual devices and bring them up:
root@node02:~# ifconfig veth1.1 10.0.0.1/24 up
root@node02:~# ip netns exec ns01 ifconfig eth0 10.0.0.2/24 up
root@node02:~# ip netns exec ns01 ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.2  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::481c:b7ff:fe38:f5e  prefixlen 64  scopeid 0x20<link>
        ether 4a:1c:b7:38:0f:5e  txqueuelen 1000  (Ethernet)
        RX packets 9  bytes 726 (726.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8  bytes 656 (656.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

root@node02:~# ifconfig veth1.1
veth1.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.1  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::3472:d3ff:fe88:4c5d  prefixlen 64  scopeid 0x20<link>
        ether 36:72:d3:88:4c:5d  txqueuelen 1000  (Ethernet)
        RX packets 10  bytes 796 (796.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 10  bytes 796 (796.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
Test connectivity between the virtual NICs in the different namespaces:
root@node02:~# ping 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms
64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=0.059 ms
64 bytes from 10.0.0.2: icmp_seq=3 ttl=64 time=0.059 ms
64 bytes from 10.0.0.2: icmp_seq=4 ttl=64 time=0.091 ms
64 bytes from 10.0.0.2: icmp_seq=5 ttl=64 time=0.058 ms
^C
--- 10.0.0.2 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4031ms
rtt min/avg/max/mdev = 0.043/0.062/0.091/0.015 ms
root@node02:~# ip netns exec ns01 ping 10.0.0.1
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms
64 bytes from 10.0.0.1: icmp_seq=2 ttl=64 time=0.048 ms
64 bytes from 10.0.0.1: icmp_seq=3 ttl=64 time=0.087 ms
64 bytes from 10.0.0.1: icmp_seq=4 ttl=64 time=0.084 ms
^C
--- 10.0.0.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3040ms
rtt min/avg/max/mdev = 0.020/0.059/0.087/0.029 ms
The host's veth1.1 can also be moved into the ns02 namespace:
root@node02:~# ip link set dev veth1.1 netns ns02
root@node02:~# ip netns exec ns02 ifconfig -a
lo: flags=8<LOOPBACK>  mtu 65536
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

veth1.1: flags=4098<BROADCAST,MULTICAST>  mtu 1500
        ether 36:72:d3:88:4c:5d  txqueuelen 1000  (Ethernet)
        RX packets 23  bytes 1874 (1.8 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 23  bytes 1874 (1.8 KB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

# the IP address is lost after the move and must be set again
root@node02:~# ip netns exec ns02 ifconfig veth1.1 10.0.0.3/24 up
root@node02:~# ip netns exec ns02 ifconfig
veth1.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.3  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::3472:d3ff:fe88:4c5d  prefixlen 64  scopeid 0x20<link>
        ether 36:72:d3:88:4c:5d  txqueuelen 1000  (Ethernet)
        RX packets 25  bytes 2054 (2.0 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 42  bytes 3048 (3.0 KB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

root@node02:~# ip netns exec ns02 ping 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms
64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=0.061 ms
64 bytes from 10.0.0.2: icmp_seq=3 ttl=64 time=0.060 ms
^C
--- 10.0.0.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2030ms
rtt min/avg/max/mdev = 0.060/0.084/0.132/0.034 ms
First, a round-up of some options used when running a container:
root@node01:~# docker container run \
    --name bbox-03 \
    -i \
    -t \
    --network bridge \
    --hostname bbox03.learn.io \
    --add-host b.163.com:1.1.1.1 \
    --add-host c.163.com:2.2.2.2 \
    --dns 114.114.114.114 \
    --dns 8.8.8.8 \
    --rm \
    busybox:latest
--network     network model the container uses: none, host, or bridge; defaults to bridge
--hostname    the container's hostname; defaults to the container ID if not specified
--add-host    add a resolution record to the container's /etc/hosts; may be used multiple times
--dns         set a DNS server for the container; may be used multiple times
--rm          remove the container automatically when it exits
There are four ways to expose a service:
docker container run -p <containerPort>
Maps the specified container port to a dynamic port on all of the host's addresses.
docker container run -p <hostPort>:<containerPort>
Maps the container port to the specified port on all of the host's addresses.
docker container run -p <ip>::<containerPort>
Maps the container port to a dynamic port on the specified host IP.
docker container run -p <ip>:<hostPort>:<containerPort>
Maps the container port to the specified port on the specified host IP.
To expose several ports, -p can be given multiple times.
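The four forms can be distinguished by counting the colons in the -p value. The sketch below (a hypothetical helper, not part of Docker) parses a publish spec into its (ip, host_port, container_port) parts, with None standing in for "all addresses" or "dynamic port"; it assumes IPv4 addresses only:

```python
def parse_publish(spec: str):
    """Split a -p publish spec into (ip, host_port, container_port).

    <cport>              -> (None, None, cport)   dynamic port, all addresses
    <hport>:<cport>      -> (None, hport, cport)  fixed port, all addresses
    <ip>::<cport>        -> (ip, None, cport)     dynamic port, one address
    <ip>:<hport>:<cport> -> (ip, hport, cport)    fixed port, one address
    """
    parts = spec.split(":")
    if len(parts) == 1:
        return (None, None, parts[0])
    if len(parts) == 2:
        return (None, parts[0], parts[1])
    ip, hport, cport = parts
    return (ip, hport or None, cport)  # empty host port means "dynamic"

print(parse_publish("80"))                      # (None, None, '80')
print(parse_publish("80:80"))                   # (None, '80', '80')
print(parse_publish("192.168.101.40::80"))      # ('192.168.101.40', None, '80')
print(parse_publish("192.168.101.40:8080:80"))  # ('192.168.101.40', '8080', '80')
```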
root@node01:~# docker container run -i -t --name httpd-01 --rm -p 80 busybox-httpd:v0.2
root@node01:~# docker container ls
CONTAINER ID   IMAGE                COMMAND                  CREATED          STATUS         PORTS                   NAMES
3708cbbc6a99   busybox-httpd:v0.2   "/bin/httpd -f -h /d…"   10 seconds ago   Up 9 seconds   0.0.0.0:32768->80/tcp   httpd-01
# check the port mapping
root@node01:~# docker port httpd-01
80/tcp -> 0.0.0.0:32768
With -p 80:80:
root@node01:~# docker port httpd-01
80/tcp -> 0.0.0.0:80
With -p 192.168.101.40::80:
root@node01:~# docker port httpd-01
80/tcp -> 192.168.101.40:32768
With -p 192.168.101.40:8080:80:
root@node01:~# docker port httpd-01
80/tcp -> 192.168.101.40:8080
Multiple Docker containers can share a network namespace, i.e. share the same network devices.
First run a container from the busybox:latest image:
root@node01:~# docker container run -i -t --rm --hostname b1 --name bbox-01 busybox:latest
/ # hostname
b1
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:02
          inet addr:172.17.0.2  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:14 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1116 (1.0 KiB)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
In another terminal, run a second container, adding the --network container:bbox-01 option:
root@node01:~# docker container run -i -t --rm --hostname b2 --name bbox-02 --network container:bbox-01 busybox:latest
docker: Error response from daemon: conflicting options: hostname and the network mode.
See 'docker run --help'.
root@node01:~# docker container run -i -t --rm --name bbox-02 --network container:bbox-01 busybox:latest
/ # hostname
b1
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:02
          inet addr:172.17.0.2  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:14 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1116 (1.0 KiB)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
Notice that bbox-01 and bbox-02 have exactly the same network addresses. Also, the --network container:bbox-01 option conflicts with --hostname, and the two containers end up with the same hostname: they share both the network namespace and the UTS (hostname) namespace.
To further verify that the two containers share a network namespace, start an httpd service in the container in the first terminal:
/ # echo "Hello Word." > /tmp/index.html
/ # httpd -h /tmp
/ # netstat -tan
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 :::80                   :::*                    LISTEN
Then check the listening sockets from the container in the second terminal:
/ # netstat -tanl
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 :::80                   :::*                    LISTEN
/ # wget -O - -q http://localhost
Hello Word.
/ #
The same listener on port 80 is visible there as well.
Since containers can share a network namespace among themselves, a container can also share the host's network:
root@node01:~# docker container run -i -t --rm --name bbox-04 --network host busybox:latest
/ # hostname
node01
/ # ifconfig
docker0   Link encap:Ethernet  HWaddr 02:42:57:49:87:3B
          inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
          inet6 addr: fe80::42:57ff:fe49:873b/64 Scope:Link
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:14 errors:0 dropped:0 overruns:0 frame:0
          TX packets:39 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:927 (927.0 B)  TX bytes:3376 (3.2 KiB)

ens33     Link encap:Ethernet  HWaddr 00:0C:29:96:48:2C
          inet addr:192.168.101.40  Bcast:192.168.101.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe96:482c/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:34294 errors:0 dropped:0 overruns:0 frame:0
          TX packets:15471 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:22539440 (21.4 MiB)  TX bytes:1727705 (1.6 MiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:290 errors:0 dropped:0 overruns:0 frame:0
          TX packets:290 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:28034 (27.3 KiB)  TX bytes:28034 (27.3 KiB)
The hostname and network devices reported are those of the host. If a service inside the container listens on a port, it can be reached from outside simply via the host's network address. The benefit is that the program is packaged in the container while the network is the host's: if the host fails or the program needs to be deployed in more places, copying the image to another host running the Docker Engine and starting it is enough, which keeps deployment simple.
By default the virtual device docker0 has the address 172.17.0.1, containers are allocated addresses from the 172.17.0.0/16 subnet, a container's nameserver defaults to the one the host uses, and the default gateway points at docker0's IP address. All of this can be customized.
# Customize the docker0 bridge's network properties in /etc/docker/daemon.json
{
  "bip": "10.1.0.1/16",
  "fixed-cidr": "10.1.0.0/16",
  "fixed-cidr-v6": "",
  "mtu": 1500,
  "default-gateway": "",
  "default-gateway-v6": "",
  "dns": ["",""]
}
The core setting is bip, the bridge IP; most of the other values can be derived from it. To change docker0's network address and the range containers are allocated from, modify bip and then restart the Docker daemon.
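The claim that the other values follow from bip can be illustrated with Python's ipaddress module (again an illustrative aside): given the bip value from the daemon.json example above, the bridge's own address and the container subnet fall out directly:

```python
import ipaddress

bip = "10.1.0.1/16"  # value from the daemon.json example above

iface = ipaddress.ip_interface(bip)
bridge_addr = iface.ip  # the address assigned to docker0 (and default gateway for containers)
subnet = iface.network  # the subnet containers are allocated from

print(bridge_addr)  # 10.1.0.1
print(subnet)       # 10.1.0.0/16
```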
Method 1
The dockerd daemon follows a client/server model and by default listens on a Unix-socket address at /var/run/docker.sock. To use a TCP socket as well, add the hosts key to /etc/docker/daemon.json:
"hosts": ["tcp://0.0.0.0:2375", "unix:///var/run/docker.sock"]
root@node01:~# vim /etc/docker/daemon.json
{
  "registry-mirrors": [
    "https://1nj0zren.mirror.aliyuncs.com",
    "https://docker.mirrors.ustc.edu.cn",
    "http://registry.docker-cn.com"
  ],
  "insecure-registries": [
    "docker.mirrors.ustc.edu.cn"
  ],
  "debug": true,
  "experimental": true,
  "hosts": ["unix:///var/run/docker.sock","tcp://0.0.0.0:2375"]
}
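Since daemon.json must remain valid JSON after the edit (a common failure mode is a missing comma or colon), it is worth validating the file before restarting dockerd. A minimal sketch, with the configuration inlined as a string for illustration:

```python
import json

# A trimmed-down daemon.json like the one edited above, inlined for illustration
raw = """
{
  "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn"],
  "debug": true,
  "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2375"]
}
"""

cfg = json.loads(raw)  # raises an error if the file is malformed JSON
hosts = cfg.get("hosts", [])
print(any(h.startswith("unix://") for h in hosts))  # True: local socket kept
print(any(h.startswith("tcp://") for h in hosts))   # True: TCP endpoint added
```

The same check against the real file would be `json.load(open("/etc/docker/daemon.json"))`.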
Stop dockerd:
root@node01:/lib/systemd/system# systemctl stop docker
Warning: Stopping docker.service, but it can still be activated by:
  docker.socket
A warning is printed, and the subsequent start attempt fails:
root@node01:/lib/systemd/system# systemctl start docker
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xe" for details.
Modify the /lib/systemd/system/docker.service file:
[Service]
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
# change to
ExecStart=/usr/bin/dockerd --containerd=/run/containerd/containerd.sock
# a reload is required after docker.service is changed
root@node01:/lib/systemd/system# systemctl daemon-reload
root@node01:/lib/systemd/system# systemctl start docker
root@node01:/lib/systemd/system# ss -tanl
State    Recv-Q    Send-Q    Local Address:Port    Peer Address:Port
LISTEN   0         128       127.0.0.53%lo:53      0.0.0.0:*
LISTEN   0         128       0.0.0.0:22            0.0.0.0:*
LISTEN   0         128       *:2375                *:*
LISTEN   0         128       [::]:22               [::]:*
Port 2375 is now listening. Stopping Docker still produces a warning, though; its impact is unclear:
root@node01:/lib/systemd/system# systemctl stop docker
Warning: Stopping docker.service, but it can still be activated by:
  docker.socket
root@node01:/lib/systemd/system# systemctl start docker
root@node01:/lib/systemd/system# ss -tanl | grep 2375
LISTEN   0         128       *:2375                *:*
From node02, use the docker command to operate on node01's resources:
root@node02:~# docker -H 192.168.101.40:2375 image ls
REPOSITORY               TAG             IMAGE ID       CREATED        SIZE
busybox-httpd            v0.2            985f056d206d   12 hours ago   1.22MB
zhaochj/httpd            v0.1            985f056d206d   12 hours ago   1.22MB
busybox-httpd            v0.1            806601ab5565   12 hours ago   1.22MB
nginx                    stable-alpine   8c1bfa967ebf   7 days ago     21.5MB
busybox                  latest          c7c37e472d31   2 weeks ago    1.22MB
quay.io/coreos/flannel   v0.12.0-amd64   4e9f801d2217   4 months ago   52.8MB
Method 2
For more information see: https://docs.docker.com/engine/reference/commandline/dockerd/
Directly modify the /lib/systemd/system/docker.service file, without touching /etc/docker/daemon.json:
[Service]
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
# change to
ExecStart=/usr/bin/dockerd -H fd:// -H tcp://0.0.0.0:2375 --containerd=/run/containerd/containerd.sock
root@node01:/lib/systemd/system# systemctl daemon-reload
root@node01:/lib/systemd/system# systemctl stop docker
root@node01:/lib/systemd/system# systemctl start docker
root@node01:/lib/systemd/system# ss -tanl | grep 2375
LISTEN   0         128       *:2375                *:*
Docker regards listening on a TCP socket as a potential security risk and does not recommend enabling it.
Docker ships with three network types by default: bridge, host, and null:
root@node01:~# docker network ls
NETWORK ID     NAME     DRIVER   SCOPE
febf7e1c8a24   bridge   bridge   local
55a2dbbe4f79   host     host     local
0f6300f03935   none     null     local
Docker actually supports other network types as well; they are simply not widely used:
root@node01:/lib/systemd/system# docker info | grep -A2 Plugins
Plugins:
 Volume: local
 Network: bridge host ipvlan macvlan null overlay
By default Docker creates the bridge-type docker0 device, but the docker command can also create additional networks of a chosen type:
root@node01:~# docker network create -d bridge --subnet "172.20.0.0/16" mybr01
b54725d7f0afe69635db7417105d73f00b5a2f4062a051074d72b2b5e41b870e
root@node01:~# ifconfig
br-b54725d7f0af: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.20.0.1  netmask 255.255.0.0  broadcast 172.20.255.255
        ether 02:42:16:8a:ac:38  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
...
root@node01:~# docker network ls
NETWORK ID     NAME     DRIVER   SCOPE
60b077384fab   bridge   bridge   local
55a2dbbe4f79   host     host     local
b54725d7f0af   mybr01   bridge   local
0f6300f03935   none     null     local
Do not rename the generated device (here "br-b54725d7f0af") with ip link set; otherwise containers attaching to that network will fail with an error saying the bridge device cannot be found.
Create a container that uses the new network:
root@node01:~# docker container run -i -t --name test01 --rm --network mybr01 busybox:latest
/ # ip route
default via 172.20.0.1 dev eth0
172.20.0.0/16 dev eth0 scope link  src 172.20.0.2
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:14:00:02
          inet addr:172.20.0.2  Bcast:172.20.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:21 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1811 (1.7 KiB)  TX bytes:426 (426.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:4 errors:0 dropped:0 overruns:0 frame:0
          TX packets:4 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:379 (379.0 B)  TX bytes:379 (379.0 B)