1. Introduction to Docker networking
Networking is one of the six namespaces on which Docker containerization is built, and it is indispensable. Network namespace support was merged into the Linux kernel in the 2.6 series. Network namespaces isolate network devices and the protocol stack: for example, if a Docker host has four NICs and one of them is assigned to a namespace when a container is created, the other namespaces cannot see that NIC. Furthermore, a physical device can belong to only one namespace at a time. If each namespace needs its own physical NIC to communicate with the outside world, four NICs would allow only four such namespaces. What do we do when we need more namespaces than we have physical NICs?
1.1 Three ways virtual networks communicate
1.1.1 Bridged networks: In KVM virtual networks we use virtual NIC devices (a set of devices emulated purely in software), and Docker is no different. At the kernel level, Linux supports emulating layer-2 devices (components that work at the data-link layer, encapsulating frames and forwarding them between network devices), and this support can be used to create virtual NIC interfaces. These interfaces are special in that they always come in pairs, modeling the two ends of a network cable: one end can be "plugged into" a host and the other into a switch, which effectively connects the host to that switch. The Linux kernel also natively supports layer-2 virtual bridge devices (a switch built in software). For example, with two namespaces, each using one virtual interface pair with one end inside the namespace and the other end attached to the virtual bridge, and both namespaces configured in the same subnet, the two containers can communicate. However, if N containers are all bridged onto the same virtual bridge device, broadcast storms become a real risk and isolation is very hard to achieve, so plain bridging at container scale is asking for trouble and should generally be avoided.
1.1.2 NAT networks: If we do not bridge but still need to reach the outside, we use NAT. NAT (Network Address Translation) rewrites the address fields in the IP header, replacing internal addresses with the egress address so that different subnets can communicate. For example: two containers on different hosts each have a private address and a virtual bridge (virtual switch); container 1's gateway points at the bridge's IP, and IP forwarding is enabled on the docker host. When container 1 sends to container 2, the packet first goes to its virtual bridge and into the kernel; the kernel sees the destination IP is not local, consults the routing table, and sends the packet out through the physical NIC, replacing the packet's source address with its own IP on the way out (this operation is SNAT). When the packet reaches container 2's host, that host rewrites the packet's destination IP to container 2's address (this operation is DNAT) and hands it to the virtual switch, which finally delivers it to container 2. The reply from container 2 goes through the same pair of rewrites (SNAT on the way out, DNAT on the way in) before it reaches container 1. In this model, communication between containers on different physical hosts always costs two address translations, which makes it inefficient, and it is also a poor fit for scenarios with many containers.
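The SNAT/DNAT pair described above can be written as two iptables rules. This is a minimal illustrative sketch, not the exact rule set Docker installs: the 172.17.0.0/16 subnet, the eth0 interface, the container address 172.17.0.2, and the 8080→80 mapping are all assumptions. The script only prints the rules; they are applied only when you explicitly opt in on a root shell.

```shell
#!/bin/sh
# Sketch of the two translations described above (assumed addresses/interfaces):
#   SNAT: rewrite the source of outbound container traffic to the egress address
#   DNAT: rewrite the destination of inbound traffic to a container address
SNAT_RULE='-t nat -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE'
DNAT_RULE='-t nat -A PREROUTING -i eth0 -p tcp --dport 8080 -j DNAT --to-destination 172.17.0.2:80'

echo "iptables $SNAT_RULE"
echo "iptables $DNAT_RULE"

# Only apply the rules when explicitly requested on a root shell:
if [ "${APPLY:-0}" = "1" ] && [ "$(id -u)" -eq 0 ]; then
    iptables $SNAT_RULE
    iptables $DNAT_RULE
fi
```

MASQUERADE is SNAT for dynamically assigned egress addresses, which is why it appears instead of a fixed `--to-source`.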
1.1.3 Overlay networks
In an overlay network, containers on different hosts still attach to a virtual bridge on their own host, but when they communicate, the physical network carries their traffic through a tunnel, so a container can see and talk to containers on other hosts directly. For example, when container 1 talks to container 2 on another host, container 1 sends the packet to its virtual bridge; the bridge finds that the destination IP is not on the local physical server, so the packet goes out through the physical NIC. But before it leaves, instead of doing SNAT, an outer IP header is added: its source is the physical NIC address of container 1's host and its destination is the physical NIC address of container 2's host. When the packet arrives, the receiving host strips the outer header, finds an inner header whose IP belongs to a local container, hands the packet to the virtual bridge, and the bridge delivers it to container 2. Carrying one IP packet inside another like this is called tunneling.
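The tunnel device that carries the inner traffic is typically a VXLAN interface. The following sketch prints the commands for one endpoint; the device name vxlan0, the VNI 100, the peer address 192.168.31.187, and the carrier NIC ens33 are assumptions chosen to match the examples in this article, and the commands are printed rather than executed so you can run them by hand on a root shell.

```shell
#!/bin/sh
# Sketch: a VXLAN tunnel endpoint as used by overlay networks.
# Outer header: this host's NIC address -> the peer host's NIC address;
# the inner frames are the containers' own traffic.
VNI=100                  # VXLAN network identifier shared by both endpoints
PEER=192.168.31.187      # physical NIC address of the other docker host (assumed)
DEV=ens33                # local physical NIC that carries the tunnel

cat <<EOF
ip link add vxlan0 type vxlan id $VNI remote $PEER dstport 4789 dev $DEV
ip link set vxlan0 master docker0   # attach the tunnel endpoint to the bridge
ip link set vxlan0 up
EOF
```

Run the same commands on the peer host with the remote address reversed, and the two bridges are joined by the tunnel.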
1.2 The four network models Docker supports
1.2.1 Closed container: only a loopback interface; this is the null (none) type
1.2.2 Bridged container: bridged type; the container's network attaches to the docker0 bridge
1.2.3 Joined container: federated type; two containers keep part of their namespaces isolated (User, Mount, PID) while sharing the same network interface and network protocol stack
1.2.4 Open container: open type; the container directly shares three of the host's namespaces (UTS, IPC, Net), communicates through the physical host's NICs, and is effectively granted the privilege of managing the host's network
2. Specifying a Docker network
2.1 The bridge network (NAT)
After installation Docker automatically provides three networks and uses the bridge (NAT) network by default: if you start a container without --network=STRING, it gets the bridge network. docker network ls shows the three networks:
[root@bogon ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
ea9de27d788c        bridge              bridge              local
126249d6b177        host                host                local
4ad67e37d383        none                null                local
During installation Docker also creates a soft switch on the host, docker0, which can act either as a layer-2 switch or as a layer-2 NIC:
[root@bogon ~]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:2f:51:41:2d  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
When we create a container, Docker automatically creates a pair of virtual NICs in software, plugging one end into the container and the other into the docker0 switch, so the container behaves as if it were physically connected to that switch.
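You can trace which host-side veth belongs to which container, because each end of a veth pair records its peer's interface index. A small sketch, assuming a running container named nginx1 (as started later in this section); it skips quietly when docker or the container is unavailable:

```shell
#!/bin/sh
# Match a container's eth0 to its host-side veth peer via interface indexes.
command -v docker >/dev/null 2>&1 || exit 0
docker inspect nginx1 >/dev/null 2>&1 || exit 0

# Inside the container, iflink holds the ifindex of the *peer* interface:
peer_index=$(docker exec nginx1 cat /sys/class/net/eth0/iflink)

# Find the host interface whose own ifindex matches that peer index:
for dev in /sys/class/net/veth*; do
    [ -e "$dev/ifindex" ] || continue
    if [ "$(cat "$dev/ifindex")" = "$peer_index" ]; then
        echo "nginx1 eth0 <-> host $(basename "$dev")"
    fi
done
```

This is the same pairing that shows up as `vethXXX@ifN` in `ip link show` output below.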
This is the host's network configuration before any container has been started:
[root@bogon ~]# ifconfig
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        inet6 fe80::42:2fff:fe51:412d  prefixlen 64  scopeid 0x20<link>
        ether 02:42:2f:51:41:2d  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 14  bytes 1758 (1.7 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.31.186  netmask 255.255.255.0  broadcast 192.168.31.255
        inet6 fe80::a3fa:7451:4298:fe76  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:fb:f6:a1  txqueuelen 1000  (Ethernet)
        RX packets 2951  bytes 188252 (183.8 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 295  bytes 36370 (35.5 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 96  bytes 10896 (10.6 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 96  bytes 10896 (10.6 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:1a:be:ae  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
Now start two containers and watch the network information change: two new veth virtual NICs appear.
Each is one half of the virtual NIC pair Docker created when starting a container.
[root@bogon ~]# docker container run --name=nginx1 -d nginx:stable
11b031f93d019640b1cd636a48fb9448ed0a7fc6103aa509cd053cbbf8605e6e
[root@bogon ~]# docker container run --name=redis1 -d redis:4-alpine
fca571d7225f6ce94ccf6aa0d832bad9b8264624e41cdf9b18a4a8f72c9a0d33
[root@bogon ~]# ifconfig
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        inet6 fe80::42:2fff:fe51:412d  prefixlen 64  scopeid 0x20<link>
        ether 02:42:2f:51:41:2d  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 14  bytes 1758 (1.7 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.31.186  netmask 255.255.255.0  broadcast 192.168.31.255
        inet6 fe80::a3fa:7451:4298:fe76  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:fb:f6:a1  txqueuelen 1000  (Ethernet)
        RX packets 2951  bytes 188252 (183.8 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 295  bytes 36370 (35.5 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 96  bytes 10896 (10.6 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 96  bytes 10896 (10.6 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

veth0a95d3a: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::cc12:e7ff:fe27:2c7f  prefixlen 64  scopeid 0x20<link>
        ether ce:12:e7:27:2c:7f  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8  bytes 648 (648.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

vethf618ec3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::882a:aeff:fe73:f6df  prefixlen 64  scopeid 0x20<link>
        ether 8a:2a:ae:73:f6:df  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 22  bytes 2406 (2.3 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:1a:be:ae  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
The other half of each pair lives inside a container:
[root@bogon ~]# docker container ls
CONTAINER ID        IMAGE               COMMAND                  CREATED              STATUS              PORTS               NAMES
fca571d7225f        redis:4-alpine      "docker-entrypoint.s…"   About a minute ago   Up About a minute   6379/tcp            redis1
11b031f93d01        nginx:stable        "nginx -g 'daemon of…"   10 minutes ago       Up 10 minutes       80/tcp              nginx1
Both host-side halves are attached to the docker0 virtual switch, which you can confirm with brctl and ip link show:
[root@bogon ~]# brctl show
bridge name     bridge id               STP enabled     interfaces
docker0         8000.02422f51412d       no              veth0a95d3a
                                                        vethf618ec3
[root@bogon ~]# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:fb:f6:a1 brd ff:ff:ff:ff:ff:ff
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:1a:be:ae brd ff:ff:ff:ff:ff:ff
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:1a:be:ae brd ff:ff:ff:ff:ff:ff
5: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
    link/ether 02:42:2f:51:41:2d brd ff:ff:ff:ff:ff:ff
7: vethf618ec3@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default
    link/ether 8a:2a:ae:73:f6:df brd ff:ff:ff:ff:ff:ff link-netnsid 0
9: veth0a95d3a@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default
    link/ether ce:12:e7:27:2c:7f brd ff:ff:ff:ff:ff:ff link-netnsid 1
Notice the suffixes on the veth interfaces, "@if6" and "@if8": those reference the other halves of the pairs, the interfaces inside the containers.
docker0 is a NAT bridge, so when Docker starts a container it also automatically generates iptables rules for it:
[root@bogon ~]# iptables -t nat -vnL
Chain PREROUTING (policy ACCEPT 43 packets, 3185 bytes)
 pkts bytes target     prot opt in     out     source               destination
   53  4066 PREROUTING_direct  all  --  *      *       0.0.0.0/0            0.0.0.0/0
   53  4066 PREROUTING_ZONES_SOURCE  all  --  *      *       0.0.0.0/0            0.0.0.0/0
   53  4066 PREROUTING_ZONES  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 DOCKER     all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 3 packets, 474 bytes)
 pkts bytes target     prot opt in     out     source               destination
   24  2277 OUTPUT_direct  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 DOCKER     all  --  *      *       0.0.0.0/0           !127.0.0.0/8          ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT 3 packets, 474 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 MASQUERADE  all  --  *      !docker0  172.17.0.0/16        0.0.0.0/0
    2   267 RETURN     all  --  *      *       192.168.122.0/24     224.0.0.0/24
    0     0 RETURN     all  --  *      *       192.168.122.0/24     255.255.255.255
    0     0 MASQUERADE  tcp  --  *      *       192.168.122.0/24    !192.168.122.0/24     masq ports: 1024-65535
    0     0 MASQUERADE  udp  --  *      *       192.168.122.0/24    !192.168.122.0/24     masq ports: 1024-65535
    0     0 MASQUERADE  all  --  *      *       192.168.122.0/24    !192.168.122.0/24
   22  2010 POSTROUTING_direct  all  --  *      *       0.0.0.0/0            0.0.0.0/0
   22  2010 POSTROUTING_ZONES_SOURCE  all  --  *      *       0.0.0.0/0            0.0.0.0/0
   22  2010 POSTROUTING_ZONES  all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain DOCKER (2 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 RETURN     all  --  docker0  *       0.0.0.0/0            0.0.0.0/0

Chain OUTPUT_direct (1 references)
 pkts bytes target     prot opt in     out     source               destination

Chain POSTROUTING_ZONES (1 references)
 pkts bytes target     prot opt in     out     source               destination
   12   953 POST_public  all  --  *      ens33   0.0.0.0/0            0.0.0.0/0           [goto]
   10  1057 POST_public  all  --  *      +       0.0.0.0/0            0.0.0.0/0           [goto]

Chain POSTROUTING_ZONES_SOURCE (1 references)
 pkts bytes target     prot opt in     out     source               destination

Chain POSTROUTING_direct (1 references)
 pkts bytes target     prot opt in     out     source               destination

Chain POST_public (2 references)
 pkts bytes target     prot opt in     out     source               destination
   22  2010 POST_public_log  all  --  *      *       0.0.0.0/0            0.0.0.0/0
   22  2010 POST_public_deny  all  --  *      *       0.0.0.0/0            0.0.0.0/0
   22  2010 POST_public_allow  all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain POST_public_allow (1 references)
 pkts bytes target     prot opt in     out     source               destination

Chain POST_public_deny (1 references)
 pkts bytes target     prot opt in     out     source               destination

Chain POST_public_log (1 references)
 pkts bytes target     prot opt in     out     source               destination

Chain PREROUTING_ZONES (1 references)
 pkts bytes target     prot opt in     out     source               destination
   53  4066 PRE_public  all  --  ens33  *       0.0.0.0/0            0.0.0.0/0           [goto]
    0     0 PRE_public  all  --  +      *       0.0.0.0/0            0.0.0.0/0           [goto]

Chain PREROUTING_ZONES_SOURCE (1 references)
 pkts bytes target     prot opt in     out     source               destination

Chain PREROUTING_direct (1 references)
 pkts bytes target     prot opt in     out     source               destination

Chain PRE_public (2 references)
 pkts bytes target     prot opt in     out     source               destination
   53  4066 PRE_public_log  all  --  *      *       0.0.0.0/0            0.0.0.0/0
   53  4066 PRE_public_deny  all  --  *      *       0.0.0.0/0            0.0.0.0/0
   53  4066 PRE_public_allow  all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain PRE_public_allow (1 references)
 pkts bytes target     prot opt in     out     source               destination

Chain PRE_public_deny (1 references)
 pkts bytes target     prot opt in     out     source               destination

Chain PRE_public_log (1 references)
 pkts bytes target     prot opt in     out     source               destination
In the POSTROUTING chain there is a MASQUERADE rule: any packet entering from any interface, not leaving via docker0, with a source address in the 172.17.0.0/16 subnet and any destination, has its source address translated; this is SNAT.
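Written out as the equivalent iptables command, that rule looks like the sketch below. The script only prints the rule; as an optional extra on a root shell with iptables installed, it also greps the live POSTROUTING chain for it:

```shell
#!/bin/sh
# The MASQUERADE rule above, as a single iptables command. It says: for
# packets whose source is in 172.17.0.0/16 and which leave through any
# interface other than docker0, replace the source address with the
# outgoing interface's address (SNAT for dynamically assigned addresses).
RULE='iptables -t nat -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE'
echo "$RULE"

# Optionally show the live rule on a docker host (root required):
if [ "$(id -u)" -eq 0 ] && command -v iptables >/dev/null 2>&1; then
    iptables -t nat -S POSTROUTING 2>/dev/null | grep MASQUERADE || true
fi
```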
As mentioned above, when Docker uses the NAT network, only the docker host and the containers on that host can reach each other directly. For containers on different hosts to communicate, DNAT (port mapping) is required, and a given host port can back only one service. So if a docker host runs several web services, only one of them can be mapped to host port 80; the others have to give up their default port and use something else. This is a significant limitation.
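A small guarded sketch of that limitation: two web containers, only one of which can own host port 80, so the second is published on another host port (8080 here, an arbitrary choice). The container names web1/web2 are assumptions; the script skips quietly when docker or the nginx:stable image is unavailable:

```shell
#!/bin/sh
# Two web services on one docker host competing for host port 80.
command -v docker >/dev/null 2>&1 || exit 0
docker info >/dev/null 2>&1 || exit 0
docker image inspect nginx:stable >/dev/null 2>&1 || exit 0

docker run --rm -d --name web1 -p 80:80   nginx:stable
docker run --rm -d --name web2 -p 8080:80 nginx:stable   # 80 is taken; use 8080

docker port web1   # shows the 80/tcp mapping for web1
docker port web2   # shows the 80/tcp mapping for web2

docker stop web1 web2
```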
2.1.1 Using the ip command to manage network namespaces
Because a container's Net, UTS, and IPC namespaces can be shared between containers, Docker can build special network models beyond the isolated, bridged, NAT, and physically bridged networks we saw earlier in KVM virtualization. We can also manipulate network namespaces by hand with the ip command: netns is one of the many objects ip can operate on.
Check that the ip command (part of iproute) is installed:
[root@bogon ~]# rpm -q iproute
iproute-4.11.0-14.el7.x86_64
Create network namespaces:
[root@bogon ~]# ip netns help
Usage: ip netns list
       ip netns add NAME
       ip netns set NAME NETNSID
       ip [-all] netns delete [NAME]
       ip netns identify [PID]
       ip netns pids NAME
       ip [-all] netns exec [NAME] cmd ...
       ip netns monitor
       ip netns list-id
[root@bogon ~]# ip netns add ns1
[root@bogon ~]# ip netns add ns2
If no interface has been created for a netns, it has only a loopback device by default:
[root@bogon ~]# ip netns exec ns1 ifconfig -a
lo: flags=8<LOOPBACK>  mtu 65536
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
[root@bogon ~]# ip netns exec ns2 ifconfig -a
lo: flags=8<LOOPBACK>  mtu 65536
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
Create a veth interface pair and move its ends into the namespaces:
[root@bogon ~]# ip link add name veth1.1 type veth peer name veth1.2
[root@bogon ~]# ip link show
... ...
7: veth1.2@veth1.1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 06:9d:b4:1f:96:88 brd ff:ff:ff:ff:ff:ff
8: veth1.1@veth1.2: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 22:ac:45:de:61:5d brd ff:ff:ff:ff:ff:ff
[root@bogon ~]# ip link set dev veth1.1 netns ns1
[root@bogon ~]# ip link set dev veth1.2 netns ns2
[root@bogon ~]# ip netns exec ns1 ip link set dev veth1.1 name eth0
[root@bogon ~]# ip netns exec ns2 ip link set dev veth1.2 name eth0
[root@bogon ~]# ip netns exec ns1 ifconfig eth0 10.10.1.1/24 up
[root@bogon ~]# ip netns exec ns2 ifconfig eth0 10.10.1.2/24 up
[root@bogon ~]# ip netns exec ns1 ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.10.1.1  netmask 255.255.255.0  broadcast 10.10.1.255
        inet6 fe80::20ac:45ff:fede:615d  prefixlen 64  scopeid 0x20<link>
        ether 22:ac:45:de:61:5d  txqueuelen 1000  (Ethernet)
        RX packets 8  bytes 648 (648.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8  bytes 648 (648.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
[root@bogon ~]# ip netns exec ns2 ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.10.1.2  netmask 255.255.255.0  broadcast 10.10.1.255
        inet6 fe80::49d:b4ff:fe1f:9688  prefixlen 64  scopeid 0x20<link>
        ether 06:9d:b4:1f:96:88  txqueuelen 1000  (Ethernet)
        RX packets 8  bytes 648 (648.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8  bytes 648 (648.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
[root@bogon ~]# ip netns exec ns1 ping 10.10.1.2
PING 10.10.1.2 (10.10.1.2) 56(84) bytes of data.
64 bytes from 10.10.1.2: icmp_seq=1 ttl=64 time=0.261 ms
64 bytes from 10.10.1.2: icmp_seq=2 ttl=64 time=0.076 ms
That completes creating network namespaces and configuring their interfaces with the ip command.
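The same tooling can reproduce the bridged model of section 1.1.1: connect two namespaces through a software bridge instead of a direct veth pair. This is a self-contained sketch; the names (br-demo, nsa, nsb) and the 10.20.1.0/24 subnet are assumptions, root is required, and the script skips quietly where namespaces cannot be created. It cleans up after itself.

```shell
#!/bin/sh
# Two namespaces joined through a Linux bridge (software switch).
[ "$(id -u)" -eq 0 ] || exit 0
ip netns add nsa 2>/dev/null || exit 0
ip netns add nsb

ip link add br-demo type bridge
ip link set br-demo up

# One veth pair per namespace: one end in the netns, the other on the bridge.
for ns in nsa nsb; do
    ip link add "v-$ns" type veth peer name "v-$ns-br"
    ip link set "v-$ns" netns "$ns"
    ip link set "v-$ns-br" master br-demo up
done

ip netns exec nsa ip addr add 10.20.1.1/24 dev v-nsa
ip netns exec nsb ip addr add 10.20.1.2/24 dev v-nsb
ip netns exec nsa ip link set v-nsa up
ip netns exec nsb ip link set v-nsb up

ip netns exec nsa ping -c 2 10.20.1.2   # the namespaces can now talk

# Clean up.
ip netns del nsa
ip netns del nsb
ip link del br-demo
```

Attach more namespaces to br-demo the same way and you have, in miniature, what docker0 does for containers.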
2.2 The host network
Start another container, this time with --network=host:
[root@bogon ~]# docker container run --name=myhttpd --network=host -d httpd:1.1
17e26c2869f88d8334ee98ea3b3d26e6abe9add5169d1812ffa0a4588935f769
[root@bogon ~]# ip netns list
ns2
ns1
Attach to the container interactively and look at its network information.
The container's network is identical to the physical host's. Note: changing network settings inside this container is the same as changing the physical host's network settings.
[root@bogon ~]# docker container exec -it myhttpd /bin/sh
sh-4.1# ifconfig
docker0   Link encap:Ethernet  HWaddr 02:42:2F:51:41:2D
          inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
          inet6 addr: fe80::42:2fff:fe51:412d/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:14 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:1758 (1.7 KiB)

ens33     Link encap:Ethernet  HWaddr 00:0C:29:FB:F6:A1
          inet addr:192.168.31.186  Bcast:192.168.31.255  Mask:255.255.255.0
          inet6 addr: fe80::a3fa:7451:4298:fe76/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:30112 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2431 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1927060 (1.8 MiB)  TX bytes:299534 (292.5 KiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:96 errors:0 dropped:0 overruns:0 frame:0
          TX packets:96 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:10896 (10.6 KiB)  TX bytes:10896 (10.6 KiB)

veth0a95d3a Link encap:Ethernet  HWaddr CE:12:E7:27:2C:7F
          inet6 addr: fe80::cc12:e7ff:fe27:2c7f/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:648 (648.0 b)

virbr0    Link encap:Ethernet  HWaddr 52:54:00:1A:BE:AE
          inet addr:192.168.122.1  Bcast:192.168.122.255  Mask:255.255.255.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

sh-4.1# ping www.baidu.com
PING www.a.shifen.com (61.135.169.125) 56(84) bytes of data.
64 bytes from 61.135.169.125: icmp_seq=1 ttl=46 time=6.19 ms
64 bytes from 61.135.169.125: icmp_seq=2 ttl=46 time=6.17 ms
64 bytes from 61.135.169.125: icmp_seq=3 ttl=46 time=6.11 ms
inspect also shows that the container's network is the host network:
sh-4.1# exit
exit
[root@bogon ~]# docker container inspect myhttpd
[
    {
        "Id": "17e26c2869f88d8334ee98ea3b3d26e6abe9add5169d1812ffa0a4588935f769",
        "Created": "2018-11-03T13:29:08.34016135Z",
        "Path": "/usr/sbin/apachectl",
        "Args": [
            " -D",
            "FOREGROUND"
        ],
        "State": {
            "Status": "running",
            "Running": true,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 4015,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2018-11-03T13:29:08.528631643Z",
            "FinishedAt": "0001-01-01T00:00:00Z"
        },
        "Image": "sha256:bbffcf779dd42e070d52a4661dcd3eaba2bed898bed8bbfe41768506f063ad32",
        "ResolvConfPath": "/var/lib/docker/containers/17e26c2869f88d8334ee98ea3b3d26e6abe9add5169d1812ffa0a4588935f769/resolv.conf",
        "HostnamePath": "/var/lib/docker/containers/17e26c2869f88d8334ee98ea3b3d26e6abe9add5169d1812ffa0a4588935f769/hostname",
        "HostsPath": "/var/lib/docker/containers/17e26c2869f88d8334ee98ea3b3d26e6abe9add5169d1812ffa0a4588935f769/hosts",
        "LogPath": "/var/lib/docker/containers/17e26c2869f88d8334ee98ea3b3d26e6abe9add5169d1812ffa0a4588935f769/17e26c2869f88d8334ee98ea3b3d26e6abe9add5169d1812ffa0a4588935f769-json.log",
        "Name": "/myhttpd",
        "RestartCount": 0,
        "Driver": "overlay2",
        "Platform": "linux",
        "MountLabel": "",
        "ProcessLabel": "",
        "AppArmorProfile": "",
        "ExecIDs": null,
        "HostConfig": {
            "Binds": null,
            "ContainerIDFile": "",
            "LogConfig": {
                "Type": "json-file",
                "Config": {}
            },
            "NetworkMode": "host",
            "PortBindings": {},
            "RestartPolicy": {
                "Name": "no",
                "MaximumRetryCount": 0
            },
            "AutoRemove": false,
            "VolumeDriver": "",
            "VolumesFrom": null,
            "CapAdd": null,
            "CapDrop": null,
            "Dns": [],
            "DnsOptions": [],
            "DnsSearch": [],
            "ExtraHosts": null,
            "GroupAdd": null,
            "IpcMode": "shareable",
            "Cgroup": "",
            "Links": null,
            "OomScoreAdj": 0,
            "PidMode": "",
            "Privileged": false,
            "PublishAllPorts": false,
            "ReadonlyRootfs": false,
            "SecurityOpt": null,
            "UTSMode": "",
            "UsernsMode": "",
            "ShmSize": 67108864,
            "Runtime": "runc",
            "ConsoleSize": [
                0,
                0
            ],
            "Isolation": "",
            "CpuShares": 0,
            "Memory": 0,
            "NanoCpus": 0,
            "CgroupParent": "",
            "BlkioWeight": 0,
            "BlkioWeightDevice": [],
            "BlkioDeviceReadBps": null,
            "BlkioDeviceWriteBps": null,
            "BlkioDeviceReadIOps": null,
            "BlkioDeviceWriteIOps": null,
            "CpuPeriod": 0,
            "CpuQuota": 0,
            "CpuRealtimePeriod": 0,
            "CpuRealtimeRuntime": 0,
            "CpusetCpus": "",
            "CpusetMems": "",
            "Devices": [],
            "DeviceCgroupRules": null,
            "DiskQuota": 0,
            "KernelMemory": 0,
            "MemoryReservation": 0,
            "MemorySwap": 0,
            "MemorySwappiness": null,
            "OomKillDisable": false,
            "PidsLimit": 0,
            "Ulimits": null,
            "CpuCount": 0,
            "CpuPercent": 0,
            "IOMaximumIOps": 0,
            "IOMaximumBandwidth": 0,
            "MaskedPaths": [
                "/proc/acpi",
                "/proc/kcore",
                "/proc/keys",
                "/proc/latency_stats",
                "/proc/timer_list",
                "/proc/timer_stats",
                "/proc/sched_debug",
                "/proc/scsi",
                "/sys/firmware"
            ],
            "ReadonlyPaths": [
                "/proc/asound",
                "/proc/bus",
                "/proc/fs",
                "/proc/irq",
                "/proc/sys",
                "/proc/sysrq-trigger"
            ]
        },
        "GraphDriver": {
            "Data": {
                "LowerDir": "/var/lib/docker/overlay2/75fab11f1bf93cae37d9725ee9cbd167a0323379f383013241457220d62159fa-init/diff:/var/lib/docker/overlay2/619fd02d3390a6299f2bb3150762a765dd68bada7f432037769778a183d94817/diff:/var/lib/docker/overlay2/fd29d7fada3334bf5dd4dfa4f38db496b7fcbb3ec070e07fe21124a4f143b85a/diff",
                "MergedDir": "/var/lib/docker/overlay2/75fab11f1bf93cae37d9725ee9cbd167a0323379f383013241457220d62159fa/merged",
                "UpperDir": "/var/lib/docker/overlay2/75fab11f1bf93cae37d9725ee9cbd167a0323379f383013241457220d62159fa/diff",
                "WorkDir": "/var/lib/docker/overlay2/75fab11f1bf93cae37d9725ee9cbd167a0323379f383013241457220d62159fa/work"
            },
            "Name": "overlay2"
        },
        "Mounts": [],
        "Config": {
            "Hostname": "bogon",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "ExposedPorts": {
                "5000/tcp": {}
            },
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
            ],
            "Cmd": [
                "/usr/sbin/apachectl",
                " -D",
                "FOREGROUND"
            ],
            "ArgsEscaped": true,
            "Image": "httpd:1.1",
            "Volumes": null,
            "WorkingDir": "",
            "Entrypoint": null,
            "OnBuild": null,
            "Labels": {}
        },
        "NetworkSettings": {
            "Bridge": "",
            "SandboxID": "91444230e357927973371cb315b9a247463320beffcde3b56248fa840bd24547",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": {},
            "SandboxKey": "/var/run/docker/netns/default",
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "",
            "Gateway": "",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "",
            "IPPrefixLen": 0,
            "IPv6Gateway": "",
            "MacAddress": "",
            "Networks": {
                "host": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": null,
                    "NetworkID": "126249d6b1771dc8aeab4aa3e75a2f3951cc765f6a43c4d0053d77c8e8f23685",
                    "EndpointID": "b87ae83df3424565b138c9d9490f503b9632d3369ed01036c05cd885e902f8ca",
                    "Gateway": "",
                    "IPAddress": "",
                    "IPPrefixLen": 0,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "",
                    "DriverOpts": null
                }
            }
        }
    }
]
2.3 The none network
[root@bogon ~]# docker container run --name=myhttpd2 --network=none -d httpd:1.1
3e7148946653a10f66158d2b1410ffb290657de81c3833efe971cad85174abc3
[root@bogon ~]# docker container exec -it myhttpd2 /bin/sh
sh-4.1# ifconfig
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
inspect confirms that the container's network is none:
sh-4.1# exit
exit
[root@bogon ~]# docker container inspect myhttpd2
[
    {
        "Id": "3e7148946653a10f66158d2b1410ffb290657de81c3833efe971cad85174abc3",
        "Created": "2018-11-03T13:37:53.153680433Z",
        "Path": "/usr/sbin/apachectl",
        "Args": [
            " -D",
            "FOREGROUND"
        ],
        "State": {
            "Status": "running",
            "Running": true,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 4350,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2018-11-03T13:37:53.563817908Z",
            "FinishedAt": "0001-01-01T00:00:00Z"
        },
        "Image": "sha256:bbffcf779dd42e070d52a4661dcd3eaba2bed898bed8bbfe41768506f063ad32",
        "ResolvConfPath": "/var/lib/docker/containers/3e7148946653a10f66158d2b1410ffb290657de81c3833efe971cad85174abc3/resolv.conf",
        "HostnamePath": "/var/lib/docker/containers/3e7148946653a10f66158d2b1410ffb290657de81c3833efe971cad85174abc3/hostname",
        "HostsPath": "/var/lib/docker/containers/3e7148946653a10f66158d2b1410ffb290657de81c3833efe971cad85174abc3/hosts",
        "LogPath": "/var/lib/docker/containers/3e7148946653a10f66158d2b1410ffb290657de81c3833efe971cad85174abc3/3e7148946653a10f66158d2b1410ffb290657de81c3833efe971cad85174abc3-json.log",
        "Name": "/myhttpd2",
        "RestartCount": 0,
        "Driver": "overlay2",
        "Platform": "linux",
        "MountLabel": "",
        "ProcessLabel": "",
        "AppArmorProfile": "",
        "ExecIDs": null,
        "HostConfig": {
            "Binds": null,
            "ContainerIDFile": "",
            "LogConfig": {
                "Type": "json-file",
                "Config": {}
            },
            "NetworkMode": "none",
            "PortBindings": {},
            "RestartPolicy": {
                "Name": "no",
                "MaximumRetryCount": 0
            },
            "AutoRemove": false,
            "VolumeDriver": "",
            "VolumesFrom": null,
            "CapAdd": null,
            "CapDrop": null,
            "Dns": [],
            "DnsOptions": [],
            "DnsSearch": [],
            "ExtraHosts": null,
            "GroupAdd": null,
            "IpcMode": "shareable",
            "Cgroup": "",
            "Links": null,
            "OomScoreAdj": 0,
            "PidMode": "",
            "Privileged": false,
            "PublishAllPorts": false,
            "ReadonlyRootfs": false,
            "SecurityOpt": null,
            "UTSMode": "",
            "UsernsMode": "",
            "ShmSize": 67108864,
            "Runtime": "runc",
            "ConsoleSize": [
                0,
                0
            ],
            "Isolation": "",
            "CpuShares": 0,
            "Memory": 0,
            "NanoCpus": 0,
            "CgroupParent": "",
            "BlkioWeight": 0,
            "BlkioWeightDevice": [],
            "BlkioDeviceReadBps": null,
            "BlkioDeviceWriteBps": null,
            "BlkioDeviceReadIOps": null,
            "BlkioDeviceWriteIOps": null,
            "CpuPeriod": 0,
            "CpuQuota": 0,
            "CpuRealtimePeriod": 0,
            "CpuRealtimeRuntime": 0,
            "CpusetCpus": "",
            "CpusetMems": "",
            "Devices": [],
            "DeviceCgroupRules": null,
            "DiskQuota": 0,
            "KernelMemory": 0,
            "MemoryReservation": 0,
            "MemorySwap": 0,
            "MemorySwappiness": null,
            "OomKillDisable": false,
            "PidsLimit": 0,
            "Ulimits": null,
            "CpuCount": 0,
            "CpuPercent": 0,
            "IOMaximumIOps": 0,
            "IOMaximumBandwidth": 0,
            "MaskedPaths": [
                "/proc/acpi",
                "/proc/kcore",
                "/proc/keys",
                "/proc/latency_stats",
                "/proc/timer_list",
                "/proc/timer_stats",
                "/proc/sched_debug",
                "/proc/scsi",
                "/sys/firmware"
            ],
            "ReadonlyPaths": [
                "/proc/asound",
                "/proc/bus",
                "/proc/fs",
                "/proc/irq",
                "/proc/sys",
                "/proc/sysrq-trigger"
            ]
        },
        "GraphDriver": {
            "Data": {
                "LowerDir": "/var/lib/docker/overlay2/ce58a73cd982d40a80723180c4e09ca16d74471a1eb3888c05da9eec6a2f4ac0-init/diff:/var/lib/docker/overlay2/619fd02d3390a6299f2bb3150762a765dd68bada7f432037769778a183d94817/diff:/var/lib/docker/overlay2/fd29d7fada3334bf5dd4dfa4f38db496b7fcbb3ec070e07fe21124a4f143b85a/diff",
                "MergedDir": "/var/lib/docker/overlay2/ce58a73cd982d40a80723180c4e09ca16d74471a1eb3888c05da9eec6a2f4ac0/merged",
                "UpperDir": "/var/lib/docker/overlay2/ce58a73cd982d40a80723180c4e09ca16d74471a1eb3888c05da9eec6a2f4ac0/diff",
                "WorkDir": "/var/lib/docker/overlay2/ce58a73cd982d40a80723180c4e09ca16d74471a1eb3888c05da9eec6a2f4ac0/work"
            },
            "Name": "overlay2"
        },
        "Mounts": [],
        "Config": {
            "Hostname": "3e7148946653",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "ExposedPorts": {
                "5000/tcp": {}
            },
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
            ],
            "Cmd": [
                "/usr/sbin/apachectl",
                " -D",
                "FOREGROUND"
            ],
            "ArgsEscaped": true,
            "Image": "httpd:1.1",
            "Volumes": null,
            "WorkingDir": "",
            "Entrypoint": null,
            "OnBuild": null,
            "Labels": {}
        },
        "NetworkSettings": {
            "Bridge": "",
            "SandboxID": "f9402b5b2dbb95c2736f25626704dec79f75800c33c0905c362e79af3810234d",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": {},
            "SandboxKey": "/var/run/docker/netns/f9402b5b2dbb",
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "",
            "Gateway": "",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "",
            "IPPrefixLen": 0,
            "IPv6Gateway": "",
            "MacAddress": "",
            "Networks": {
                "none": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": null,
                    "NetworkID": "4ad67e37d38389253ca55c39ad8d615cef40c6bb9b535051679b2d1ed6cb01e8",
                    "EndpointID": "83913b6eaeed3775fbbcbb9375491dd45e527d81837048cffa63b3064ad6e7e3",
                    "Gateway": "",
                    "IPAddress": "",
                    "IPPrefixLen": 0,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "",
                    "DriverOpts": null
                }
            }
        }
    }
]
2.4 Joined networks
A joined network starts a container jointly with another container: they share the Net, UTS, and IPC namespaces, while the remaining namespaces stay separate.
Start two containers, with the second using the first container's network namespace:
[root@bogon ~]# docker container run --name myhttpd -d httpd:1.1
7053b88aacb35d859e00d47133c084ebb9288ce3fb47b6c588153a5e6c6dd5f0
[root@bogon ~]# docker container run --name myhttpd1 -d --network container:myhttpd redis:4-alpine
99191b8fc853f546f3b381d36cc2f86bc7f31af31daf0e19747411d2f1a10686
[root@bogon ~]# docker container ls -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
99191b8fc853        redis:4-alpine      "docker-entrypoint.s…"   5 seconds ago       Up 3 seconds                            myhttpd1
7053b88aacb3        httpd:1.1           "/usr/sbin/apachectl…"   3 minutes ago       Up 3 minutes        5000/tcp            myhttpd
Log into the first container to start verifying:
[root@bogon ~]# docker container exec -it myhttpd /bin/sh
sh-4.1# ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:02
          inet addr:172.17.0.2  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:648 (648.0 b)  TX bytes:0 (0.0 b)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

sh-4.1# ps -ef |grep httpd
root         7     1  0 08:15 ?        00:00:00 /usr/sbin/httpd -D FOREGROUND
apache       8     7  0 08:15 ?        00:00:00 /usr/sbin/httpd -D FOREGROUND
apache       9     7  0 08:15 ?        00:00:00 /usr/sbin/httpd -D FOREGROUND
sh-4.1# mkdir /tmp/testdir
sh-4.1# ls /tmp/
testdir
Then log into the second container to verify:
[root@bogon ~]# docker container exec -it myhttpd1 /bin/sh
/data # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:02
          inet addr:172.17.0.2  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:648 (648.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

/data # ps -ef |grep redis
    1 redis     0:00 redis-server
/data # ls /tmp/
/data #
The directory created in the httpd container does not exist in the second container, yet both containers use the same IP address. This shows that a joined network keeps the Mount, User, and PID namespaces isolated while sharing one set of Net, IPC, and UTS namespaces.
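The same conclusion can be checked from the host side by comparing the namespace inodes of the two container processes under /proc. A sketch, assuming the myhttpd and myhttpd1 containers above are still running; it needs root and skips quietly otherwise:

```shell
#!/bin/sh
# Compare the net and mnt namespace inodes of the two joined containers.
command -v docker >/dev/null 2>&1 || exit 0
docker inspect myhttpd myhttpd1 >/dev/null 2>&1 || exit 0
[ "$(id -u)" -eq 0 ] || exit 0

pid1=$(docker inspect -f '{{.State.Pid}}' myhttpd)
pid2=$(docker inspect -f '{{.State.Pid}}' myhttpd1)

readlink /proc/$pid1/ns/net /proc/$pid2/ns/net   # same inode: shared Net
readlink /proc/$pid1/ns/mnt /proc/$pid2/ns/mnt   # different inodes: isolated Mount
```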
3. Network-related settings when starting a container
3.1 Setting the container's hostname
By default a container uses its container ID as its hostname; to set one yourself, pass a parameter at startup.
The --hostname parameter sets the given hostname and also automatically adds a matching local entry to the container's hosts file:
[root@bogon ~]# docker container run --name mycentos -it centos:6.6 /bin/sh
sh-4.1# hostname
02f68247b097
[root@bogon ~]# docker container ls -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                     PORTS               NAMES
02f68247b097        centos:6.6          "/bin/sh"           15 seconds ago      Exited (0) 7 seconds ago                       mycentos
[root@bogon ~]# docker container run --name mycentos --hostname centos1.local -it centos:6.6 /bin/sh
sh-4.1# hostname
centos1.local
sh-4.1# cat /etc/hosts
127.0.0.1       localhost
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.2      centos1.local centos1
3.2 Setting the container's DNS
If no DNS is specified, a container uses the DNS servers configured on the host by default:
[root@bogon ~]# cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 202.106.196.115
[root@bogon ~]# docker container run --name mycentos -it centos:6.6 /bin/sh
sh-4.1# cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 202.106.196.115
[root@bogon ~]# docker container run --name mycentos --dns 114.114.114.114 -it --rm centos:6.6 /bin/sh
sh-4.1# cat /etc/resolv.conf
nameserver 114.114.114.114
3.3、Manually adding hosts entries
[root@bogon ~]# docker container run --name mycentos --rm --add-host bogon:192.168.31.186 --add-host www.baidu.com:1.1.1.1 -it centos:6.6 /bin/sh
sh-4.1#
sh-4.1# cat /etc/hosts
127.0.0.1       localhost
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
192.168.31.186  bogon
1.1.1.1 www.baidu.com
172.17.0.2      ea40852f5871
3.4、Exposing container ports
If the container uses a bridge network and a service inside it needs to be reachable by external clients, the container's port must be exposed (published) on the host.
1、Dynamic exposure (map a container port to a random port on all of the host's addresses)
[root@bogon ~]# docker container run --name myhttpd --rm -d -p 80 httpd:1.1
54c1b69f4a8b28abc8d65d836d3ed1ae916d982947800da5bace2fa41d2a0ce5
[root@bogon ~]#
[root@bogon ~]# curl 172.17.0.2
<h1>Welcom To My Httpd</h1>
[root@bogon ~]# iptables -t nat -vnL
Chain PREROUTING (policy ACCEPT 17 packets, 1582 bytes)
 pkts bytes target     prot opt in     out     source               destination
   34  3134 PREROUTING_direct  all  --  *      *       0.0.0.0/0            0.0.0.0/0
   34  3134 PREROUTING_ZONES_SOURCE  all  --  *      *       0.0.0.0/0            0.0.0.0/0
   34  3134 PREROUTING_ZONES  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    1    52 DOCKER     all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 3 packets, 310 bytes)
 pkts bytes target     prot opt in     out     source               destination
   28  2424 OUTPUT_direct  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 DOCKER     all  --  *      *       0.0.0.0/0           !127.0.0.0/8          ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT 3 packets, 310 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 MASQUERADE  all  --  *      !docker0  172.17.0.0/16        0.0.0.0/0
    2   267 RETURN     all  --  *      *       192.168.122.0/24     224.0.0.0/24
    0     0 RETURN     all  --  *      *       192.168.122.0/24     255.255.255.255
    0     0 MASQUERADE  tcp  --  *      *       192.168.122.0/24    !192.168.122.0/24     masq ports: 1024-65535
    0     0 MASQUERADE  udp  --  *      *       192.168.122.0/24    !192.168.122.0/24     masq ports: 1024-65535
    0     0 MASQUERADE  all  --  *      *       192.168.122.0/24    !192.168.122.0/24
   26  2157 POSTROUTING_direct  all  --  *      *       0.0.0.0/0            0.0.0.0/0
   26  2157 POSTROUTING_ZONES_SOURCE  all  --  *      *       0.0.0.0/0            0.0.0.0/0
   26  2157 POSTROUTING_ZONES  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 MASQUERADE  tcp  --  *      *       172.17.0.2           172.17.0.2           tcp dpt:80

Chain DOCKER (2 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 RETURN     all  --  docker0 *       0.0.0.0/0            0.0.0.0/0
    0     0 DNAT       tcp  --  !docker0 *      0.0.0.0/0            0.0.0.0/0            tcp dpt:32768 to:172.17.0.2:80
[root@bogon ~]# docker port myhttpd
80/tcp -> 0.0.0.0:32768
As you can see, publishing a port at container startup automatically creates an iptables DNAT rule on the docker host: port 32768 on all host addresses is mapped to port 80 in the container.
Accessing the container's httpd service from another host:
[root@centos7-node2 ~]# curl 192.168.31.186:32768
<h1>Welcom To My Httpd</h1>
[root@centos7-node2 ~]#
2、Static exposure (mapping to a chosen host address and/or port)
2.1、Expose on a random port of a specific host address
If no host port is given, separate the fields with two colons: hostIP::containerPort
[root@bogon ~]# docker container run --name myhttpd -d -p 192.168.31.186::80 httpd:1.1
50f3788eefe1016b9df2a3f2fcc1bfa19a2110675396daed075d1d4d0e69798b
[root@bogon ~]# docker port myhttpd
80/tcp -> 192.168.31.186:32768
[root@bogon ~]#
[root@centos7-node2 ~]# curl 192.168.31.186:32768
<h1>Welcom To My Httpd</h1>
2.2、Expose on a specific port of all host addresses
If no address is given, it can simply be omitted: hostPort:containerPort
[root@bogon ~]# docker container run --name myhttpd --rm -d -p 80:80 httpd:1.1
2fde0e49c3545fb28624b01b737b22650ba98dfa09674e8ccb3b6722c7dcd257
[root@bogon ~]# docker port myhttpd
80/tcp -> 0.0.0.0:80
2.3、Expose on a specific port of a specific host address
[root@bogon ~]# docker container run --name myhttpd --rm -d -p 192.168.31.186:8080:80 httpd:1.1
a9152173fafc650c47c6a35040e0c50876f841756529334cb509fdff53ce60c7
[root@bogon ~]#
[root@bogon ~]# docker port myhttpd
80/tcp -> 192.168.31.186:8080
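The -p forms used in this section differ only in which fields are present. As a sketch, the hypothetical helper below (not part of docker; the function name and output format are made up for illustration) splits a -p spec into its fields the same way docker interprets them:

```shell
#!/bin/sh
# Hypothetical parser mirroring the -p publishing forms shown above:
#   CTRPORT | HOSTPORT:CTRPORT | HOSTIP::CTRPORT | HOSTIP:HOSTPORT:CTRPORT
parse_publish_spec() {
    spec=$1
    case "$spec" in
        *:*:*)  host_ip=${spec%%:*}            # two colons: address present
                rest=${spec#*:}
                host_port=${rest%%:*}          # may be empty (ip::ctrport)
                ctr_port=${rest#*:} ;;
        *:*)    host_ip=""                     # one colon: hostport:ctrport
                host_port=${spec%%:*}
                ctr_port=${spec#*:} ;;
        *)      host_ip=""                     # bare container port
                host_port=""
                ctr_port=$spec ;;
    esac
    # docker fills in 0.0.0.0 and a random ephemeral port for omitted fields
    echo "host_ip=${host_ip:-0.0.0.0} host_port=${host_port:-random} ctr_port=${ctr_port}"
}

parse_publish_spec 80                      # dynamic: all addresses, random port
parse_publish_spec 192.168.31.186::80      # fixed address, random port
parse_publish_spec 80:80                   # all addresses, fixed port
parse_publish_spec 192.168.31.186:8080:80  # fixed address, fixed port
```

This is only a reading aid for the syntax; the real parsing happens inside dockerd.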
4、Customizing the docker configuration file
1、The default docker0 bridge address can be changed in docker's configuration file
[root@bogon ~]# vim /etc/docker/daemon.json
{
    "registry-mirrors": ["https://registry.docker-cn.com"],
    "bip": "10.10.1.2/16"
}
[root@bogon ~]# systemctl restart docker.service
[root@bogon ~]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 10.10.1.2  netmask 255.255.0.0  broadcast 10.10.255.255
        inet6 fe80::42:cdff:fef5:e3ba  prefixlen 64  scopeid 0x20<link>
        ether 02:42:cd:f5:e3:ba  txqueuelen 0  (Ethernet)
        RX packets 38  bytes 3672 (3.5 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 53  bytes 5152 (5.0 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
Only bip needs to be added to the configuration file; docker derives the gateway, netmask, and related settings automatically. As you can see, docker0's IP address has changed to the subnet we just configured.
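The values docker derives from bip can be checked by hand. The sketch below is a standalone illustration (not anything docker itself runs) that redoes the subnet math for the "10.10.1.2/16" value set above:

```shell
#!/bin/sh
# Recompute the netmask and broadcast that correspond to bip=10.10.1.2/16
bip="10.10.1.2/16"
ip=${bip%/*}
prefix=${bip#*/}

ip2int() {  # dotted quad -> 32-bit integer
    IFS=. read -r a b c d <<EOF
$1
EOF
    echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}
int2ip() {  # 32-bit integer -> dotted quad
    echo "$(( ($1 >> 24) & 255 )).$(( ($1 >> 16) & 255 )).$(( ($1 >> 8) & 255 )).$(( $1 & 255 ))"
}

mask=$(( (0xffffffff << (32 - prefix)) & 0xffffffff ))
net=$(( $(ip2int "$ip") & mask ))
bcast=$(( net | (~mask & 0xffffffff) ))

echo "inet $ip  netmask $(int2ip $mask)  broadcast $(int2ip $bcast)"
```

The printed line matches the docker0 values in the ifconfig output above: netmask 255.255.0.0, broadcast 10.10.255.255.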
2、Changing how docker listens
By default docker listens only on its unix socket file; to listen on TCP as well, edit the configuration file.
[root@bogon ~]# vim /etc/docker/daemon.json
{
    "registry-mirrors": ["https://registry.docker-cn.com"],
    "bip": "10.10.1.2/16",
    "hosts": ["tcp://0.0.0.0:33333", "unix:///var/run/docker.sock"]
}
[root@bogon ~]# systemctl restart docker.service
[root@bogon ~]# netstat -tlunp |grep 33333
tcp6       0      0 :::33333                :::*                    LISTEN      6621/dockerd
[root@bogon ~]#
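One caveat, noted as an assumption about stock docker-ce packaging rather than something shown in this setup: if the distribution's docker.service unit already passes an -H flag (commonly -H fd://) on the dockerd command line, adding hosts to daemon.json makes dockerd refuse to start, because the same option is then configured from two sources. A systemd drop-in that clears ExecStart is a common workaround:

```ini
# /etc/systemd/system/docker.service.d/override.conf (path assumed)
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd
```

Run systemctl daemon-reload before restarting docker.service so the drop-in takes effect.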
After this change, this docker server can be managed remotely from any other host with docker installed.
[root@centos7-node2 ~]# docker -H 192.168.31.186:33333 image ls
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
httpd               1.1                 bbffcf779dd4        4 days ago          264MB
nginx               stable              ecc98fc2f376        3 weeks ago         109MB
centos              6.6                 4e1ad2ce7f78        3 weeks ago         203MB
redis               4-alpine            05097a3a0549        4 weeks ago         30MB
[root@centos7-node2 ~]# docker -H 192.168.31.186:33333 container ls -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS                        PORTS               NAMES
a8409019e310        redis:4-alpine      "docker-entrypoint.s…"   2 hours ago         Exited (0) 29 minutes ago                         redis2
99191b8fc853        redis:4-alpine      "docker-entrypoint.s…"   3 hours ago         Exited (0) 29 minutes ago                         myhttpd1
7053b88aacb3        httpd:1.1           "/usr/sbin/apachectl…"   3 hours ago         Exited (137) 28 minutes ago                       myhttpd
[root@centos7-node2 ~]#
3、Creating docker networks
Docker supports the bridge, none, host, macvlan, and overlay network drivers; if no driver is specified at creation time, a bridge network is created by default.
[root@bogon ~]# docker info
Containers: 3
 Running: 0
 Paused: 0
 Stopped: 3
Images: 4
Server Version: 18.06.1-ce
Storage Driver: overlay2
 Backing Filesystem: xfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
Create a custom network:
[root@bogon ~]# docker network create --driver bridge --subnet 192.168.30.0/24 --gateway 192.168.30.1 mybridge0
859e5a2975979740575d6365de326e18991db7b70188b7a50f6f842ca21e1d3d
[root@bogon ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
f23c1f889968        bridge              bridge              local
126249d6b177        host                host                local
859e5a297597        mybridge0           bridge              local
4ad67e37d383        none                null                local
[root@bogon ~]# ifconfig
br-859e5a297597: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.30.1  netmask 255.255.255.0  broadcast 192.168.30.255
        inet6 fe80::42:f4ff:feeb:6a16  prefixlen 64  scopeid 0x20<link>
        ether 02:42:f4:eb:6a:16  txqueuelen 0  (Ethernet)
        RX packets 5  bytes 365 (365.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 29  bytes 3002 (2.9 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 10.10.1.2  netmask 255.255.0.0  broadcast 10.10.255.255
        inet6 fe80::42:cdff:fef5:e3ba  prefixlen 64  scopeid 0x20<link>
        ether 02:42:cd:f5:e3:ba  txqueuelen 0  (Ethernet)
        RX packets 38  bytes 3672 (3.5 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 53  bytes 5152 (5.0 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
Start a container attached to the mybridge0 network:
[root@bogon ~]# docker container run --name redis1 --network mybridge0 -d redis:4-alpine
6d6d11266e3208e45896c40e71c6e3cecd9f7710f2f3c39b401d9f285f28c2f7
[root@bogon ~]# docker container exec -it redis1 /bin/sh
/data #
/data # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:C0:A8:1E:02
          inet addr:192.168.30.2  Bcast:192.168.30.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:24 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:2618 (2.5 KiB)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
4、Deleting a custom docker network
Before deleting a network, stop any containers still running on it.
[root@bogon ~]# docker network rm mybridge0
mybridge0
[root@bogon ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
f23c1f889968        bridge              bridge              local
126249d6b177        host                host                local
4ad67e37d383        none                null                local