When Docker is installed, it automatically creates three networks on the host. We can list them with the docker network ls command:

[root@localhost ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
0164da7ee66a        bridge              bridge              local
a4a5d0b84564        host                host                local
df2c5c066a6a        none                null                local
host mode, specified with --net=host on docker run. The container's network is effectively the same as the host's: a container started in host mode does not get its own Network Namespace but shares the host's. The container does not virtualize its own NIC or configure its own IP; it uses the host's IP and ports directly. However, other aspects of the container, such as the filesystem and process list, are still isolated from the host.

Demo:

[root@localhost ~]# docker run -it --rm --net=host --name net1 centos_1 bash
[root@localhost /]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 0.0.0.0
        inet6 fe80::42:5aff:fe52:25a9  prefixlen 64  scopeid 0x20<link>
        ether 02:42:5a:52:25:a9  txqueuelen 0  (Ethernet)
        RX packets 32541  bytes 45836190 (43.7 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 45025  bytes 305790826 (291.6 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.165  netmask 255.255.255.0  broadcast 192.168.0.255
        inet6 fe80::71bd:4770:36ed:a5df  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:06:15:d8  txqueuelen 1000  (Ethernet)
        RX packets 690783  bytes 269935255 (257.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 164584  bytes 86989110 (82.9 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1  (Local Loopback)
        RX packets 5206  bytes 265735 (259.5 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 5206  bytes 265735 (259.5 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
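A quick way to confirm the sharing is to compare the interface list inside a host-mode container with the host's own. A minimal sketch, assuming a Docker host that can pull the busybox image (the guard lines make it skip cleanly elsewhere):

```shell
# Sketch: a host-mode container shares the host's network stack (assumes a Docker host).
docker info >/dev/null 2>&1 || { echo "docker not available, skipping"; exit 0; }
docker pull busybox >/dev/null 2>&1 || { echo "cannot pull busybox, skipping"; exit 0; }

# The interface list inside the container is the host's own:
docker run --rm --net=host busybox ip -o link show | awk -F': ' '{print $2}'
ip -o link show | awk -F': ' '{print $2}'   # same list when run on the host

# Consequence: a service bound in one host-mode container occupies that port for
# the whole host, so two host-mode containers cannot listen on the same port.
```

This also means host mode needs no -p port mapping at all; the service is reachable on the host IP directly.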
container mode, specified with --net=container:container_id/container_name, lets multiple containers use a common network. In this mode the newly created container shares a Network Namespace with an existing container, rather than with the host. The new container does not create its own NIC or configure its own IP; it shares the specified container's IP, port range, and so on. As before, apart from networking, the two containers' other aspects, such as the filesystem and process list, remain isolated. Processes in the two containers can communicate through the lo device.

Demo:

①Create a container net2 and check its IP, which is 172.17.0.2:

[root@localhost ~]# docker run -itd --name net2 centos_1 bash
b8a14e5e8a670d5680aae830f79267257143397c124d011fbf09b71c59b37e5d
[root@localhost ~]# docker exec -it net2 bash
[root@b8a14e5e8a67 /]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.17.0.2  netmask 255.255.0.0  broadcast 0.0.0.0
        inet6 fe80::42:acff:fe11:2  prefixlen 64  scopeid 0x20<link>
        ether 02:42:ac:11:00:02  txqueuelen 0  (Ethernet)
        RX packets 8  bytes 648 (648.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8  bytes 648 (648.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

②Create container net3 in container network mode; net3's IP is also 172.17.0.2:

[root@localhost ~]# docker run -it --net=container:net2 --name net3 centos_1 bash
[root@b8a14e5e8a67 /]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.17.0.2  netmask 255.255.0.0  broadcast 0.0.0.0
        inet6 fe80::42:acff:fe11:2  prefixlen 64  scopeid 0x20<link>
        ether 02:42:ac:11:00:02  txqueuelen 0  (Ethernet)
        RX packets 8  bytes 648 (648.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8  bytes 648 (648.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

③docker ps shows that net2 and net3 have different container IDs, but because of container network mode, entering net3 shows the same hostname (container ID) as net2:

[root@localhost ~]# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
a795f6825e1e        centos_1            "bash"              6 minutes ago       Up 3 seconds                            net3
b8a14e5e8a67        centos_1            "bash"              8 minutes ago       Up 8 minutes                            net2
[root@localhost ~]# docker exec -it net3 bash
[root@b8a14e5e8a67 /]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.17.0.2  netmask 255.255.0.0  broadcast 0.0.0.0
        inet6 fe80::42:acff:fe11:2  prefixlen 64  scopeid 0x20<link>
        ether 02:42:ac:11:00:02  txqueuelen 0  (Ethernet)
        RX packets 8  bytes 648 (648.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8  bytes 648 (648.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
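The shared namespace can also be verified from the command line without entering an interactive shell. A small sketch, using busybox instead of the centos_1 image above so it is self-contained (names are illustrative, and the guards make it a no-op without a Docker host):

```shell
# Sketch: a --net=container sibling sees the exact same eth0 (requires a Docker host).
docker info >/dev/null 2>&1 || { echo "docker not available, skipping"; exit 0; }
docker pull busybox >/dev/null 2>&1 || { echo "cannot pull busybox, skipping"; exit 0; }

docker run -d --name shared busybox sleep 300
docker exec shared ip -o -4 addr show eth0                        # IP of the first container
docker run --rm --net=container:shared busybox ip -o -4 addr show eth0   # identical: one namespace

# Because it is one namespace, a service listening on 127.0.0.1 in either
# container is reachable from the other over lo, with no bridge or NAT involved.
docker rm -f shared
```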
none mode, specified with --net=none. In this mode no networking is configured at all. With none mode the container has its own Network Namespace, but Docker performs no network configuration for it whatsoever: the container has no NIC, no IP, no routes. We have to add a NIC, configure an IP, and so on for the container ourselves.

Demo:

[root@localhost ~]# docker run -it --net=none --name net4 centos_1 bash
[root@b12e7ad03af2 /]# ifconfig
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
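Wiring such a container by hand is essentially what the pipework tool (used later in this article) automates. A rough sketch of the manual steps with a veth pair and iproute2, assuming root on a Docker host; busybox stands in for the demo image, and all names and addresses are illustrative:

```shell
# Sketch: manually giving a --net=none container a NIC (run as root on a Docker host).
docker info >/dev/null 2>&1 || { echo "docker not available, skipping"; exit 0; }
[ "$(id -u)" -eq 0 ] || { echo "needs root, skipping"; exit 0; }
docker pull busybox >/dev/null 2>&1 || { echo "cannot pull busybox, skipping"; exit 0; }

docker run -d --net=none --name net4 busybox sleep 600
pid=$(docker inspect -f '{{.State.Pid}}' net4)

# Expose the container's network namespace to `ip netns`:
mkdir -p /var/run/netns
ln -sf /proc/$pid/ns/net /var/run/netns/net4

# Create a veth pair; attach one end to docker0, push the other into the container:
ip link add veth_h type veth peer name veth_c
ip link set veth_h master docker0 && ip link set veth_h up
ip link set veth_c netns net4
ip netns exec net4 ip link set veth_c name eth0
ip netns exec net4 ip addr add 172.17.0.99/16 dev eth0
ip netns exec net4 ip link set eth0 up
ip netns exec net4 ip route add default via 172.17.0.1
docker rm -f net4
```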
bridge mode, specified with --net=bridge. This is the default mode. When the Docker daemon starts, it creates a virtual bridge named docker0 on the host, and containers started on this host attach to it. The virtual bridge works like a physical switch, so all containers on the host are joined into one layer-2 network. Docker allocates the container an IP from the docker0 subnet and sets docker0's IP address as the container's default gateway. On the host it creates a veth pair, places one end inside the new container and names it eth0 (the container's NIC), and leaves the other end on the host with a name like vethxxx, attached to the docker0 bridge. This can be inspected with the brctl show command.

bridge mode is Docker's default network mode: omitting the --net argument gives you bridge mode. When you use docker run -p, Docker actually installs DNAT rules in iptables to implement the port forwarding. They can be inspected with iptables -t nat -vnL.

Demo:

①Check the host's docker0 virtual bridge, whose IP is 172.17.0.1:

[root@localhost ~]# ifconfig
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 0.0.0.0
        inet6 fe80::42:5aff:fe52:25a9  prefixlen 64  scopeid 0x20<link>
        ether 02:42:5a:52:25:a9  txqueuelen 0  (Ethernet)
        RX packets 32557  bytes 45837262 (43.7 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 45025  bytes 305790826 (291.6 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.165  netmask 255.255.255.0  broadcast 192.168.0.255
        inet6 fe80::71bd:4770:36ed:a5df  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:06:15:d8  txqueuelen 1000  (Ethernet)
        RX packets 702882  bytes 271309720 (258.7 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 166364  bytes 87203641 (83.1 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

②Create container net5 in bridge mode and check its IP and gateway:

[root@localhost ~]# docker run -it --name net5 --net=bridge centos_1 bash
[root@a3a6416d08c0 /]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.17.0.3  netmask 255.255.0.0  broadcast 0.0.0.0
        inet6 fe80::42:acff:fe11:3  prefixlen 64  scopeid 0x20<link>
        ether 02:42:ac:11:00:03  txqueuelen 0  (Ethernet)
        RX packets 6  bytes 508 (508.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 6  bytes 508 (508.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

[root@a3a6416d08c0 /]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.17.0.1      0.0.0.0         UG    0      0        0 eth0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 eth0
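To see the DNAT rule that docker run -p installs, publish a port and dump the nat table's DOCKER chain. A sketch, with busybox's built-in httpd standing in for a real service (the container name and ports are illustrative):

```shell
# Sketch: inspect the iptables DNAT rule behind -p (root on a Docker host).
docker info >/dev/null 2>&1 || { echo "docker not available, skipping"; exit 0; }
[ "$(id -u)" -eq 0 ] || { echo "needs root for iptables, skipping"; exit 0; }
docker pull busybox >/dev/null 2>&1 || { echo "cannot pull busybox, skipping"; exit 0; }

docker run -d -p 8080:8000 --name web busybox httpd -f -p 8000
# The DOCKER chain of the nat table should now hold a rule along the lines of:
#   DNAT  tcp  dpt:8080  to:172.17.0.X:8000
iptables -t nat -vnL DOCKER
docker rm -f web
```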
We can use the bridge driver to create a network similar to the default bridge above, for example:

[root@localhost ~]# docker network create --driver bridge my_net    # create bridge network my_net
afb854fd239b26f95265002190f9df88f8b7f66c204085bfd16c6a2b4932f5d9
[root@localhost ~]# brctl show    # check how the host's network structure changed
bridge name         bridge id           STP enabled         interfaces
br-afb854fd239b     8000.02422702f1bc   no
docker0             8000.0242646f882f   no                  veth211fb49
                                                            veth709c331
                                                            veth8069764
                                                            vethfa120d8

A new bridge br-afb854fd239b has appeared; br-afb854fd239b is exactly the short ID of the newly created bridge network my_net. Run docker network inspect to see my_net's configuration:

[root@localhost ~]# docker network inspect my_net
[
    {
        "Name": "my_net",
        "Id": "afb854fd239b26f95265002190f9df88f8b7f66c204085bfd16c6a2b4932f5d9",
        "Created": "2018-04-21T14:14:15.479906429+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",    # 172.18.0.0/16 is the subnet Docker allocated automatically
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]
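Besides the separate subnet, a user-defined bridge like my_net differs from docker0 in one practical way: Docker's embedded DNS resolves container names on user-defined networks, while the default bridge offers no name resolution (short of the legacy --link). A sketch, using a throwaway network name so it does not clash with my_net:

```shell
# Sketch: name-based discovery on a user-defined bridge (requires a Docker host).
docker info >/dev/null 2>&1 || { echo "docker not available, skipping"; exit 0; }
docker pull busybox >/dev/null 2>&1 || { echo "cannot pull busybox, skipping"; exit 0; }

docker network create --driver bridge demo_net
docker run -d --network=demo_net --name dns1 busybox sleep 300
# The embedded DNS server resolves "dns1" to the container's IP on demo_net:
docker run --rm --network=demo_net busybox ping -c 2 dns1
docker rm -f dns1 && docker network rm demo_net
```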
Specify the --subnet and --gateway parameters to customize the IP range:

[root@localhost ~]# docker network create --driver bridge --subnet 192.168.100.0/24 --gateway 192.168.100.1 my_net2
889ba4ceb97290e440db559e104db2bf9273854fd789322aaea30b3c76937af6
[root@localhost ~]# docker network inspect my_net2
[
    {
        "Name": "my_net2",
        "Id": "889ba4ceb97290e440db559e104db2bf9273854fd789322aaea30b3c76937af6",
        "Created": "2018-04-21T14:19:15.730480499+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "192.168.100.0/24",
                    "Gateway": "192.168.100.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]

This creates a new bridge network my_net2 with subnet 192.168.100.0/24 and gateway 192.168.100.1. As before, the gateway sits on my_net2's corresponding bridge, br-889ba4ceb972:

[root@localhost ~]# brctl show
bridge name         bridge id           STP enabled         interfaces
br-889ba4ceb972     8000.02424b2256df   no
br-afb854fd239b     8000.02422702f1bc   no
docker0             8000.0242646f882f   no                  veth211fb49
                                                            veth709c331
                                                            veth8069764
                                                            vethfa120d8
[root@localhost ~]# ifconfig br-889ba4ceb972
br-889ba4ceb972: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.100.1  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 02:42:4b:22:56:df  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
For a container to use the new network, it must be specified at start time with --network; a static IP can also be assigned directly with the --ip parameter:

[root@localhost ~]# docker run -it --network=my_net2 busybox
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
116: eth0@if117: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:c0:a8:64:02 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.2/24 scope global eth0    ## the container was assigned 192.168.100.2
       valid_lft forever preferred_lft forever
    inet6 fe80::42:c0ff:fea8:6402/64 scope link
       valid_lft forever preferred_lft forever

[root@localhost ~]# docker run -it --network=my_net2 --ip 192.168.100.100 busybox
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
118: eth0@if119: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:c0:a8:64:64 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.100/24 scope global eth0    ## the container was assigned 192.168.100.100
       valid_lft forever preferred_lft forever
    inet6 fe80::42:c0ff:fea8:6464/64 scope link
       valid_lft forever preferred_lft forever

Note: a static IP can only be specified on networks created with --subnet. my_net was created without --subnet, so specifying a static IP on it fails:

[root@localhost ~]# docker run -it --rm --network=my_net --ip 172.18.0.100 busybox
/usr/bin/docker-current: Error response from daemon: User specified IP address is supported only when connecting to networks with user configured subnets.
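A container is also not locked to the network it started on: a running container can be attached to or detached from another network with docker network connect / disconnect, gaining or losing an extra eth interface on the fly. A sketch with an illustrative network name:

```shell
# Sketch: hot-plug a second network into a running container (requires a Docker host).
docker info >/dev/null 2>&1 || { echo "docker not available, skipping"; exit 0; }
docker pull busybox >/dev/null 2>&1 || { echo "cannot pull busybox, skipping"; exit 0; }

docker network create demo_net2
docker run -d --name multi busybox sleep 300   # starts on the default bridge
docker network connect demo_net2 multi         # attach a second network
docker exec multi ip -o -4 addr show           # now shows both eth0 and eth1
docker network disconnect demo_net2 multi      # detach it again
docker rm -f multi && docker network rm demo_net2
```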
To let machines on the local network communicate with Docker containers more conveniently, we often need to put containers on the same network segment as the host. This is easy to achieve: bridge the Docker container with the host's NIC, then give the container an IP on that segment.
[root@localhost network-scripts]# cp ifcfg-enp0s3 ifcfg-br0
[root@localhost network-scripts]# vim ifcfg-br0
Note: change TYPE=Bridge, DEVICE=br0, NAME=br0
TYPE=Bridge
BOOTPROTO=static
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=br0
#UUID=faa61166-b507-4992-b055-2c6284de3981
DEVICE=br0
ONBOOT=yes
IPADDR=192.168.0.165
GATEWAY=192.168.0.1
NETMASK=255.255.255.0
DNS1=8.8.8.8
#NM_CONTROLLED=no

[root@localhost network-scripts]# vim ifcfg-enp0s3
Note: add BRIDGE=br0; remove IPADDR, GATEWAY, NETMASK, DNS
TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=enp0s3
#UUID=faa61166-b507-4992-b055-2c6284de3981
DEVICE=enp0s3
ONBOOT=yes
#IPADDR=192.168.0.165
#GATEWAY=192.168.0.1
#NETMASK=255.255.255.0
#DNS1=8.8.8.8
#NM_CONTROLLED=no
BRIDGE=br0
[root@localhost network-scripts]# systemctl restart network
[root@localhost network-scripts]# ifconfig
br0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.165  netmask 255.255.255.0  broadcast 192.168.0.255
        inet6 fe80::ca01:a411:fb77:c348  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:06:15:d8  txqueuelen 1000  (Ethernet)
        RX packets 36  bytes 3485 (3.4 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 25  bytes 2318 (2.2 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 0.0.0.0
        ether 02:42:7e:ec:e1:e6  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 08:00:27:06:15:d8  txqueuelen 1000  (Ethernet)
        RX packets 2831  bytes 321711 (314.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1070  bytes 182494 (178.2 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1  (Local Loopback)
        RX packets 96  bytes 7888 (7.7 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 96  bytes 7888 (7.7 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
[root@localhost ~]# git clone https://github.com/jpetazzo/pipework
Cloning into 'pipework'...
remote: Counting objects: 501, done.
remote: Total 501 (delta 0), reused 0 (delta 0), pack-reused 501
Receiving objects: 100% (501/501), 172.97 KiB | 4.00 KiB/s, done.
Resolving deltas: 100% (264/264), done.
[root@localhost ~]# cp pipework/pipework /usr/local/bin/
[root@localhost ~]# docker run -itd --net=none --name pipework centos_nginx bash
ab88e2159ce32408154a776c1c62cf1af170fa8ce4d01908da6175f01b6c787d
[root@localhost ~]# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
ab88e2159ce3        centos_nginx        "bash"              4 seconds ago       Up 4 seconds                            pipework
[root@localhost ~]# docker exec -it pipework bash
[root@ab88e2159ce3 /]# ifconfig
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
[root@ab88e2159ce3 /]# exit
exit
[root@localhost ~]# pipework br0 pipework 192.168.0.166/24@192.168.0.1
[root@localhost ~]# docker exec -it pipework bash
[root@ab88e2159ce3 /]# ifconfig
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.166  netmask 255.255.255.0  broadcast 192.168.0.255
        inet6 fe80::340c:ebff:fe50:1ba3  prefixlen 64  scopeid 0x20<link>
        ether 36:0c:eb:50:1b:a3  txqueuelen 1000  (Ethernet)
        RX packets 62  bytes 10518 (10.2 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 10  bytes 732 (732.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
[root@ab88e2159ce3 /]# ping www.baidu.com
PING www.a.shifen.com (14.215.177.39) 56(84) bytes of data.
64 bytes from 14.215.177.39 (14.215.177.39): icmp_seq=1 ttl=54 time=8.29 ms
64 bytes from 14.215.177.39 (14.215.177.39): icmp_seq=2 ttl=54 time=8.09 ms
64 bytes from 14.215.177.39 (14.215.177.39): icmp_seq=3 ttl=54 time=8.43 ms
64 bytes from 14.215.177.39 (14.215.177.39): icmp_seq=4 ttl=54 time=8.12 ms
64 bytes from 14.215.177.39 (14.215.177.39): icmp_seq=5 ttl=54 time=8.80 ms
64 bytes from 14.215.177.39 (14.215.177.39): icmp_seq=6 ttl=54 time=8.51 ms
^C
--- www.a.shifen.com ping statistics ---
6 packets transmitted, 6 received, 0% packet loss, time 5007ms
rtt min/avg/max/mdev = 8.094/8.378/8.805/0.249 ms
[root@localhost ~]# docker run -itd --privileged -e "container=docker" --name pipework --net=none centos_nginx /usr/sbin/init
aa85a59dc347633fcd9a2b5206eaed619451c52f299d2505c32df2b6d1ce7521
[root@localhost ~]# pipework br0 pipework 192.168.0.166/24@192.168.0.1
[root@localhost ~]# docker exec -it pipework bash
[root@aa85a59dc347 /]# ifconfig
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.166  netmask 255.255.255.0  broadcast 192.168.0.255
        inet6 fe80::a00d:aff:fec2:a59d  prefixlen 64  scopeid 0x20<link>
        ether a2:0d:0a:c2:a5:9d  txqueuelen 1000  (Ethernet)
        RX packets 192  bytes 21152 (20.6 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 11  bytes 774 (774.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

[root@aa85a59dc347 /]# systemctl start nginx
[root@aa85a59dc347 /]# netstat -tulnp |grep nginx
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      86/nginx: master pr
tcp6       0      0 :::80                   :::*                    LISTEN      86/nginx: master pr
[root@aa85a59dc347 /]# ps -ef |grep nginx
root        86     1  0 07:54 ?        00:00:00 nginx: master process /usr/sbin/nginx
nginx       87    86  0 07:54 ?        00:00:00 nginx: worker process
root        98    68  0 07:54 ?        00:00:00 grep --color=auto nginx