Docker has been around for quite a few years now and is nothing new. Many companies and fellow developers have had at least some exposure to it, but few have real hands-on experience. This tutorial series is practice-oriented and tries to deliver substance; I explain things as I understand them, and you can consult the official documentation for the formal concepts. The goals of this chapter are as follows:
Series navigation:
[Docker深刻淺出系列 | 容器初體驗](https://www.cnblogs.com/evan-liang/p/12237400.html)
This chapter builds on the virtual machine, operating system, and Docker installation created in Chapter 1.
How do two containers on the same host communicate with each other?
How do two containers achieve network isolation from each other?
How can I use a browser outside the server to reach a container listening on port 8080 inside the server?
What are Docker's three network modes, and what are their characteristics?
In the Internet world, two hosts communicate by connecting two NICs (network interfaces): packets are sent and received through the NICs, and the two NICs effectively form a communication pipe between the hosts.
1. View link-layer NIC information
ip link show
[root@10 /]# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:8a:fe:e6 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:ba:0a:28 brd ff:ff:ff:ff:ff:ff
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
    link/ether 02:42:12:9a:b1:a7 brd ff:ff:ff:ff:ff:ff
2. Show address information for each NIC, which includes more detail such as IP addresses
ip a
[root@10 /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:8a:fe:e6 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic eth0
       valid_lft 57001sec preferred_lft 57001sec
    inet6 fe80::5054:ff:fe8a:fee6/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:ba:0a:28 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.12/24 brd 192.168.100.255 scope global noprefixroute dynamic eth1
       valid_lft 143401sec preferred_lft 143401sec
    inet6 fe80::a00:27ff:feba:a28/64 scope link
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:12:9a:b1:a7 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
3. List all NICs on the system
ls /sys/class/net
[root@10 /]# ls /sys/class/net
docker0  eth0  eth1  lo
Key fields in the ip a output, explained using eth1 as an example:
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:ba:0a:28 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.12/24 brd 192.168.100.255 scope global noprefixroute dynamic eth1
       valid_lft 143401sec preferred_lft 143401sec
    inet6 fe80::a00:27ff:feba:a28/64 scope link
       valid_lft forever preferred_lft forever
<BROADCAST,MULTICAST,UP,LOWER_UP>
This flag string tells us:
BROADCAST: the interface supports broadcast
MULTICAST: the interface supports multicast
UP: the interface is enabled
LOWER_UP: a network cable is plugged in and the device is connected to the network
Other fields:
mtu 1500: the maximum transmission unit (packet size) is 1,500 bytes
qdisc pfifo_fast: the queueing discipline used for packet scheduling
state UP: the interface is enabled
group default: the interface group
qlen 1000: the transmit queue length
link/ether 08:00:27:ba:0a:28: the interface's MAC (hardware) address
brd ff:ff:ff:ff:ff:ff: the link-layer broadcast address
inet 192.168.100.12/24: the bound IPv4 address
brd 192.168.100.255: the IPv4 broadcast address
scope global: valid everywhere
dynamic eth1: the address was assigned dynamically (e.g. via DHCP)
valid_lft 143401sec: the valid lifetime of the IPv4 address
preferred_lft 143401sec: the preferred lifetime of the IPv4 address
inet6 fe80::a00:27ff:feba:a28/64: the IPv6 address
scope link: valid only on this device
valid_lft forever: the valid lifetime of the IPv6 address
preferred_lft forever: the preferred lifetime of the IPv6 address
The configuration file for a given NIC, ifcfg-*, can be viewed with the following command:
cat /etc/sysconfig/network-scripts/ifcfg-eth0
You could copy one of the ifcfg-* files and add a new IP configuration there directly; a sketch of such a file is shown below. Since my network here uses a dynamic IP, though, adding the address with a command is more convenient, as in the step after the sketch.
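As a minimal sketch, assuming CentOS 7 ifcfg conventions, a static alias file might look like this (the device name and address are illustrative, not from my setup):

DEVICE=eth0:0           # hypothetical alias of eth0
BOOTPROTO=static        # static addressing instead of DHCP
IPADDR=192.168.0.100
PREFIX=24
ONBOOT=yes              # bring the alias up at boot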
[root@10 /]# ip addr add 192.168.0.100/24 dev eth0
As the ip a output below shows, the new IP has been successfully bound:
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:8a:fe:e6 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic eth0
       valid_lft 54131sec preferred_lft 54131sec
    inet 192.168.0.100/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fe8a:fee6/64 scope link
       valid_lft forever preferred_lft forever
The added IP can be removed again with the following command:
ip addr delete 192.168.0.100/24 dev eth0
Restart the network service:
service network restart    # or: systemctl restart network
Bring a NIC up or down:
ifup eth0 / ifdown eth0    # or: ip link set eth0 up / ip link set eth0 down
From the above we know that two hosts communicate through a pair of connected NICs. So how do we simulate multiple network environments within a single Linux system, and how do two containers achieve network isolation?
In Linux, network isolation is done with network namespaces. Docker uses the same technology to create a completely isolated network environment, including an independent set of network interfaces, routing tables, ARP tables, IP addresses, iptables, ebtables, and so on. In short, everything network-related is independent.
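To see this isolation concretely, here is a minimal sketch (assuming root privileges and iproute2; the namespace name is a throwaway): a freshly created namespace has its own empty routing table and its own iptables chains, independent of the host's.

ip netns add demo                  # create a throwaway namespace
ip netns exec demo ip route        # prints nothing: its routing table is independent of the host's
ip netns exec demo iptables -nL    # its own (empty) iptables chains
ip netns delete demo               # clean up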
The ip command provides the ip netns exec subcommand for running commands inside a given network namespace. The command can be anything, not just network-related. Once a network namespace has been created, you operate on it with ip netns exec <namespace> <command>; for example, to view the NIC info inside a namespace: ip netns exec ns1 ip a
Some common network namespace commands:
ip netns list          # list network namespaces
ip netns add ns1       # add a network namespace
ip netns delete ns1    # delete a network namespace
1. Create a network namespace, ns1
[root@10 /]# ip netns add ns1
2. View the NIC info in ns1
[root@10 /]# ip netns exec ns1 ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
At this point there is only a lo device, and its state is DOWN.
3. Bring up the lo device in ns1
[root@10 /]# ip netns exec ns1 ifup lo
4. Check the NIC state
[root@10 /]# ip netns exec ns1 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
The state has now changed to UNKNOWN, and lo is bound to the local loopback address 127.0.0.1/8.
Repeat the steps above with ns2 in place of ns1; at the end the NIC state will likewise be UNKNOWN:
[root@10 /]# ip netns exec ns2 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
After this series of steps, the network structure of the two namespaces looks like this:
At this point the two network namespaces have only the lo device and no link between them; they cannot communicate with each other, and each can only be reached by applications inside the same namespace.
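You can confirm the isolation: with only lo up, ns1 has no route to anything outside itself, so pinging any external address fails straight away (a quick check, assuming the namespaces created above):

ip netns exec ns1 ping -c 1 192.168.0.12   # fails, typically with "Network is unreachable": no route out of ns1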
Linux provides the Virtual Ethernet Pair technology, veth pair for short: a pair of network interfaces is created, one in each namespace, like a pipe between the two namespaces, as if you had run a network cable between them, so that they can communicate.
A veth pair always exists in twos: delete one end and the other disappears automatically.
1. Create a pair of virtual veth NICs, veth-ns1 and veth-ns2. They behave like a pipe: packets sent to veth-ns1 come out on veth-ns2, and packets sent to veth-ns2 come out on veth-ns1. It is equivalent to installing two NICs on the machine and connecting them with a network cable.
[root@10 /]# ip link add veth-ns1 type veth peer name veth-ns2
2. View the links; the pair has been created
[root@10 /]# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:8a:fe:e6 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:ba:0a:28 brd ff:ff:ff:ff:ff:ff
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
    link/ether 02:42:12:9a:b1:a7 brd ff:ff:ff:ff:ff:ff
5: veth-ns2@veth-ns1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 2a:96:f4:e3:00:d2 brd ff:ff:ff:ff:ff:ff
6: veth-ns1@veth-ns2: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether da:89:0e:56:03:3f brd ff:ff:ff:ff:ff:ff
3. Move the two virtual NICs into the two network namespaces ns1 and ns2
[root@10 /]# ip link set veth-ns1 netns ns1
[root@10 /]# ip link set veth-ns2 netns ns2
4. Check the host machine and the two network namespaces separately
ip link
ip netns exec ns1 ip link
ip netns exec ns2 ip link
The virtual NIC in ns1:
6: veth-ns1@if5: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether da:89:0e:56:03:3f brd ff:ff:ff:ff:ff:ff link-netnsid 1
The virtual NIC in ns2:
5: veth-ns2@if6: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 2a:96:f4:e3:00:d2 brd ff:ff:ff:ff:ff:ff link-netnsid 0
The output shows that the pair of virtual NICs has moved from the host into the two network namespaces. Their state is still DOWN, and their interface indices pair up in order: @if5 and @if6.
5. Bring up both virtual NICs
[root@10 /]# ip netns exec ns1 ip link set veth-ns1 up
[root@10 /]# ip netns exec ns2 ip link set veth-ns2 up
The results are as follows:
6: veth-ns1@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether da:89:0e:56:03:3f brd ff:ff:ff:ff:ff:ff link-netnsid 1
5: veth-ns2@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 2a:96:f4:e3:00:d2 brd ff:ff:ff:ff:ff:ff link-netnsid 0
Both virtual NICs are now UP, but they have no IP addresses yet, so they still cannot communicate.
6. Assign IP addresses to the virtual NICs
[root@10 /]# ip netns exec ns1 ip addr add 192.168.0.11/24 dev veth-ns1
[root@10 /]# ip netns exec ns2 ip addr add 192.168.0.12/24 dev veth-ns2
7. Verify with ip a that the IP addresses were configured successfully
[root@10 /]# ip netns exec ns1 ip a
[root@10 /]# ip netns exec ns2 ip a
8. Test whether ns1 and ns2 can reach each other
[root@10 /]# ip netns exec ns1 ping 192.168.0.12
PING 192.168.0.12 (192.168.0.12) 56(84) bytes of data.
64 bytes from 192.168.0.12: icmp_seq=1 ttl=64 time=0.048 ms
[root@10 /]# ip netns exec ns2 ping 192.168.0.11
PING 192.168.0.11 (192.168.0.11) 56(84) bytes of data.
64 bytes from 192.168.0.11: icmp_seq=1 ttl=64 time=0.041 ms
64 bytes from 192.168.0.11: icmp_seq=2 ttl=64 time=0.039 ms
64 bytes from 192.168.0.11: icmp_seq=3 ttl=64 time=0.041 ms
The two network namespaces are now successfully connected.
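When you are done experimenting (we leave ns1 and ns2 in place for now and use fresh names in the next section), the namespaces can be removed; deleting a namespace destroys the veth end inside it, and the peer end disappears with it:

ip netns delete ns1   # also removes veth-ns1, and with it its peer veth-ns2
ip netns delete ns2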
The veth pair above only solves communication between two namespaces; multiple namespaces cannot all talk to each other directly this way, because they sit on different networks. In everyday life we would use a switch to connect different networks; in Linux, we can achieve the same with a bridge.
In the figure above, the two namespaces ns3 and ns4 are not connected directly by a veth pair but indirectly through a bridge. Next I will walk you through setting this up as in the figure, step by step.
1. Create the network namespaces
To avoid confusion with the earlier examples, we create fresh namespaces:
[root@10 /]# ip netns add ns3
[root@10 /]# ip netns add ns4
[root@10 /]# ip netns add bridge
2. Create a pair of veth NICs
[root@10 /]# ip link add type veth
3. View the pair of virtual NICs generated on the host
[root@10 /]# ip link
...
7: veth0@veth1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 36:8e:bc:43:f0:4a brd ff:ff:ff:ff:ff:ff
8: veth1@veth0: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 92:c0:44:18:64:93 brd ff:ff:ff:ff:ff:ff
The host now has a pair of NICs, veth0 and veth1.
4. Move veth0 into ns3 and veth1 into bridge, renaming the virtual NICs to ns3-bridge and bridge-ns3 respectively
[root@10 /]# ip link set dev veth0 name ns3-bridge netns ns3
[root@10 /]# ip link set dev veth1 name bridge-ns3 netns bridge
5. Create another veth pair; move veth0 into ns4 and veth1 into bridge, renaming the virtual NICs to ns4-bridge and bridge-ns4 respectively
[root@10 /]# ip link add type veth
[root@10 /]# ip link set dev veth0 name ns4-bridge netns ns4
[root@10 /]# ip link set dev veth1 name bridge-ns4 netns bridge
6. Check the NIC info inside each namespace
[root@10 /]# ip netns exec ns4 ip a
...
9: ns4-bridge@if10: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether ea:53:ea:e6:2e:2e brd ff:ff:ff:ff:ff:ff link-netnsid 1

[root@10 /]# ip netns exec ns3 ip a
...
7: ns3-bridge@if8: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 36:8e:bc:43:f0:4a brd ff:ff:ff:ff:ff:ff link-netnsid 1

[root@10 /]# ip netns exec bridge ip a
...
8: bridge-ns3@if7: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 92:c0:44:18:64:93 brd ff:ff:ff:ff:ff:ff link-netnsid 0
10: bridge-ns4@if9: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 9e:f4:57:43:2e:2b brd ff:ff:ff:ff:ff:ff link-netnsid 1
The interface indices of ns3-bridge/bridge-ns3 and ns4-bridge/bridge-ns4 are consecutive (@if7/@if8 and @if9/@if10), confirming that each is the same veth pair.
7. Create a br device (a network bridge) in the bridge namespace
在對bridge進行操做,須要用到bridge-utils
,能夠經過如下命令安裝
yum install bridge-utils
Now create the br device:
[root@10 /]# ip netns exec bridge brctl addbr br
8. Bring up the br device
[root@10 /]# ip netns exec bridge ip link set dev br up
9. Bring up the two virtual NICs in the bridge namespace
[root@10 /]# ip netns exec bridge ip link set dev bridge-ns3 up
[root@10 /]# ip netns exec bridge ip link set dev bridge-ns4 up
10. Attach the two virtual NICs in bridge to the br device
[root@10 /]# ip netns exec bridge brctl addif br bridge-ns3
[root@10 /]# ip netns exec bridge brctl addif br bridge-ns4
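Incidentally, if bridge-utils is not available on your system, steps 7 and 10 can also be done with plain iproute2; a hedged equivalent sketch:

ip netns exec bridge ip link add br type bridge         # instead of brctl addbr br
ip netns exec bridge ip link set bridge-ns3 master br   # instead of brctl addif br bridge-ns3
ip netns exec bridge ip link set bridge-ns4 master br   # instead of brctl addif br bridge-ns4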
11. Bring up the virtual NICs in ns3 and ns4, and add IP addresses
[root@10 /]# ip netns exec ns3 ip link set dev ns3-bridge up
[root@10 /]# ip netns exec ns3 ip address add 192.168.0.13/24 dev ns3-bridge
[root@10 /]# ip netns exec ns4 ip link set dev ns4-bridge up
[root@10 /]# ip netns exec ns4 ip address add 192.168.0.14/24 dev ns4-bridge
12. Test the connectivity between the two namespaces
[root@10 /]# ip netns exec ns3 ping 192.168.0.14
PING 192.168.0.14 (192.168.0.14) 56(84) bytes of data.
64 bytes from 192.168.0.14: icmp_seq=1 ttl=64 time=0.061 ms
64 bytes from 192.168.0.14: icmp_seq=2 ttl=64 time=0.047 ms
64 bytes from 192.168.0.14: icmp_seq=3 ttl=64 time=0.042 ms

[root@10 /]# ip netns exec ns4 ping 192.168.0.13
PING 192.168.0.13 (192.168.0.13) 56(84) bytes of data.
64 bytes from 192.168.0.13: icmp_seq=1 ttl=64 time=0.046 ms
64 bytes from 192.168.0.13: icmp_seq=2 ttl=64 time=0.076 ms
64 bytes from 192.168.0.13: icmp_seq=3 ttl=64 time=0.081 ms
Docker has three common network modes: bridge, none, and host.
Docker network commands:
docker network ls - list the available networks
docker network create - create a new network
docker network rm - remove a network
docker network inspect - inspect a network
docker network connect - connect a container to a network
docker network disconnect - disconnect a container from a network
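For instance, a typical round trip with these commands might look like this (a sketch; the network name and subnet are illustrative, not part of the setup used below):

docker network create --driver bridge --subnet 172.19.0.0/16 mynet   # hypothetical network
docker network inspect mynet                                         # check its subnet and containers
docker network connect mynet tomcat01                                # attach an existing container
docker network disconnect mynet tomcat01                             # detach it again
docker network rm mynet                                              # remove the network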
bridge is the default network mode for Docker containers. Its implementation follows the same principle as the multi-namespace example in our network virtualization section: containers are connected indirectly through veth pairs and a bridge. When Docker starts, it automatically creates a virtual bridge called docker0 on the host. It is really a Linux bridge, which you can think of as a software switch: it forwards between the interfaces attached to it, creating a virtual shared network between the host and all containers.
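You can check which interfaces are currently attached to docker0 (a quick check; brctl comes from bridge-utils, while the second command needs only iproute2):

brctl show docker0            # list the ports attached to the docker0 bridge
ip link show master docker0   # the same information with iproute2 only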
Next, taking the tomcat containers in the figure above as an example, let's walk through bridge mode in practice.
1. Start two tomcat containers (using the tomcat image created in Chapter 1)
[root@10 vagrant]# docker run -d --name tomcat01 -p 8081:8080 tomcat
[root@10 /]# docker run -d --name tomcat03 -p 8082:8080 tomcat
2. View the network interface info of the two tomcat containers
[root@10 /]# docker exec -it tomcat01 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
11: eth0@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever

[root@10 /]# docker exec -it tomcat03 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
15: eth0@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
tomcat01 and tomcat03 have the IPs 172.17.0.2/16 and 172.17.0.3/16 respectively, and each has a veth-pair-based virtual NIC, eth0@if12 and eth0@if16, whose indices are not consecutive with each other. Clearly, from what we learned above, these two NICs are not a pair and cannot communicate directly; they must be connected through a bridge.
3. Check the NIC info on the host CentOS system to verify that virtual NICs corresponding to the tomcat containers exist
[root@10 /]# ip a
...
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:12:9a:b1:a7 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:12ff:fe9a:b1a7/64 scope link
       valid_lft forever preferred_lft forever
12: veth068cc5c@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether 66:1c:13:cd:b4:78 brd ff:ff:ff:ff:ff:ff link-netnsid 5
    inet6 fe80::641c:13ff:fecd:b478/64 scope link
       valid_lft forever preferred_lft forever
16: veth92816fa@if15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether 0a:cf:a0:8e:78:7f brd ff:ff:ff:ff:ff:ff link-netnsid 6
    inet6 fe80::8cf:a0ff:fe8e:787f/64 scope link
       valid_lft forever preferred_lft forever
Sure enough, the host has the virtual NIC peers for both tomcat containers, plus the docker0 virtual bridge interface. This way of connecting is called bridge mode.
4. Inspect the bridge's configuration with the docker network inspect bridge command
[root@10 /]# docker network inspect bridge
...
"Containers": {
    "2f3c3081b8bd409334f21da3441f3b457e243293f3180d54cfc12d5902ad4dbc": {
        "Name": "tomcat03",
        "EndpointID": "2375535cefdbccd3434d563ef567a1032694bdfb4356876bd9d8c4e07b1f222b",
        "MacAddress": "02:42:ac:11:00:03",
        "IPv4Address": "172.17.0.3/16",
        "IPv6Address": ""
    },
    "c13db4614a49c302121e467d8aa8ea4f008ab55f83461430d3dd46e59085937f": {
        "Name": "tomcat01",
        "EndpointID": "99a04efa9c7bdb0232f98d25f490682b065de1ce076b31487778fa257552a2ba",
        "MacAddress": "02:42:ac:11:00:02",
        "IPv4Address": "172.17.0.2/16",
        "IPv6Address": ""
    }
},
Both containers are bound to the bridge.
5. Test that the two tomcat containers can reach each other
[root@10 /]# docker exec -it tomcat01 ping 172.17.0.3
PING 172.17.0.3 (172.17.0.3) 56(84) bytes of data.
64 bytes from 172.17.0.3: icmp_seq=1 ttl=64 time=0.054 ms
64 bytes from 172.17.0.3: icmp_seq=2 ttl=64 time=0.040 ms
64 bytes from 172.17.0.3: icmp_seq=3 ttl=64 time=0.039 ms

[root@10 /]# docker exec -it tomcat03 ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.046 ms
64 bytes from 172.17.0.2: icmp_seq=2 ttl=64 time=0.042 ms
64 bytes from 172.17.0.2: icmp_seq=3 ttl=64 time=0.039 ms
The two containers can access each other.
6. Accessing Internet sites from inside a container
Docker containers can reach the host network through the bridge, and can therefore reach the Internet indirectly via iptables NAT. This lets many internal users go online through a single external address, solving the scarcity of IP resources.
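You can see the NAT rule behind this in the host's NAT table (a quick check; the exact output varies by environment):

iptables -t nat -nL POSTROUTING   # expect a MASQUERADE rule for the bridge subnet, e.g. 172.17.0.0/16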
Next, pick one of the tomcat containers we just created to test. If you find that the container cannot reach the Internet, you may need to restart the Docker service: systemctl restart docker
[root@10 /]# docker exec -it tomcat01 curl -I https://www.baidu.com
HTTP/1.1 200 OK
Accept-Ranges: bytes
Cache-Control: private, no-cache, no-store, proxy-revalidate, no-transform
Connection: keep-alive
Content-Length: 277
Content-Type: text/html
Date: Thu, 06 Feb 2020 06:03:48 GMT
Etag: "575e1f72-115"
Last-Modified: Mon, 13 Jun 2016 02:50:26 GMT
Pragma: no-cache
Server: bfe/1.0.8.18
The response above shows the container successfully reached the Baidu site.
host mode shares the host machine's network, i.e. the container uses the same network namespace as the host.
1. Create a container named tomcat-host with the network mode set to host
[root@10 /]# docker run -d --name tomcat-host --network host tomcat
ee3c6d2a5f61caa371088f40bc0c5d11101d12845cdee24466322a323b11ee11
2. The container's network interfaces turn out to be identical to the host CentOS's
[root@10 /]# docker exec -it tomcat-host ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:8a:fe:e6 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic eth0
       valid_lft 50028sec preferred_lft 50028sec
    inet6 fe80::5054:ff:fe8a:fee6/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:ba:0a:28 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.12/24 brd 192.168.100.255 scope global noprefixroute dynamic eth1
       valid_lft 156886sec preferred_lft 156886sec
    inet6 fe80::a00:27ff:feba:a28/64 scope link
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:12:9a:b1:a7 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:12ff:fe9a:b1a7/64 scope link
       valid_lft forever preferred_lft forever
3. Inspecting the network info shows that the container has not been assigned its own IP address
"Containers": { "ee3c6d2a5f61caa371088f40bc0c5d11101d12845cdee24466322a323b11ee11": { "Name": "tomcat-host", "EndpointID": "53565ff879878bfd10fc5843582577d54eb68b14b29f4b1ff2e213d38e2af7ce", "MacAddress": "", "IPv4Address": "", "IPv6Address": "" } }
As mentioned above, none mode gives the container an independent namespace with no initial network configuration at all; it is isolated from the outside world, and any networking has to be set up by hand.
1. Create a container tomcat-none with the network set to none
[root@10 /]# docker run -d --name tomcat-none --network none tomcat
d90808e0b7455c2f375c3d88fa18a1872b4a03e2112bff3db0b3996d16523b1a
2. View its network interface info
[root@10 /]# docker exec -it tomcat-none ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
There is only the lo device, no other NIC, and only the local loopback IP address.
3. View the Docker network info
"Containers": { "d90808e0b7455c2f375c3d88fa18a1872b4a03e2112bff3db0b3996d16523b1a": { "Name": "tomcat-none", "EndpointID": "4ea757bbd108ac783bd1257d33499b7b77cd7ea529d4e6c761923eb596dc446c", "MacAddress": "", "IPv4Address": "", "IPv6Address": "" } }
The container has not been assigned any address.
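If needed, a none-mode container can be wired up by hand with exactly the namespace techniques from earlier in this chapter. A hedged sketch, run as root on the host; the address and device names are illustrative, not part of the walkthrough:

# expose the container's network namespace to `ip netns`
pid=$(docker inspect -f '{{.State.Pid}}' tomcat-none)
mkdir -p /var/run/netns
ln -s /proc/$pid/ns/net /var/run/netns/tomcat-none

# create a veth pair, put one end in the container, attach the other to docker0
ip link add veth-host type veth peer name veth-cont
ip link set veth-cont netns tomcat-none
ip netns exec tomcat-none ip addr add 172.17.0.100/16 dev veth-cont
ip netns exec tomcat-none ip link set veth-cont up
ip netns exec tomcat-none ip route add default via 172.17.0.1
ip link set veth-host master docker0
ip link set veth-host up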
We can also create networks of our own; the default driver is bridge. Next we demonstrate how to connect containers that sit on different networks.
1. Create a new network named custom; the default mode is bridge
[root@10 /]# docker network create custom
af392e4739d810b2e12219c21f505135537e95ea0afcb5075b3b1a5622a66112
2. List the current Docker networks
[root@10 /]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
ce20377e3f10        bridge              bridge              local
af392e4739d8        custom              bridge              local
afc6ca3cf515        host                host                local
94cfa528d194        none                null                local
3. Inspect the custom network's details
[root@10 /]# docker network inspect custom
[
    {
        "Name": "custom",
        "Id": "af392e4739d810b2e12219c21f505135537e95ea0afcb5075b3b1a5622a66112",
        "Created": "2020-02-05T23:49:08.321895241Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]
4. Create a container tomcat-custom with its network mode set to custom
[root@10 /]# docker run -d --name tomcat-custom --network custom tomcat
2e77115f42e36827646fd6e3abacc0594ff71cd1847f6fbffda28e22fb55e9ea
5. View tomcat-custom's network interface info
[root@10 /]# docker exec -it tomcat-custom ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
22: eth0@if23: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.18.0.2/16 brd 172.18.255.255 scope global eth0
       valid_lft forever preferred_lft forever
6. Try to reach the tomcat01 container created earlier from tomcat-custom
[root@10 /]# docker exec -it tomcat-custom ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.

--- 172.17.0.2 ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 3001ms
The result shows that, by default, tomcat-custom cannot reach tomcat01, because they are on different networks: custom and bridge.
7. Connect tomcat01 to the custom network
[root@10 /]# docker network connect custom tomcat01
8. View the current custom network info
"Containers": { "2e77115f42e36827646fd6e3abacc0594ff71cd1847f6fbffda28e22fb55e9ea": { "Name": "tomcat-custom", "EndpointID": "bf2b94f3b580b9df0ca9f6ce2383198961711d1b3d19d33bbcf578d81157e47f", "MacAddress": "02:42:ac:12:00:02", "IPv4Address": "172.18.0.2/16", "IPv6Address": "" }, "c13db4614a49c302121e467d8aa8ea4f008ab55f83461430d3dd46e59085937f": { "Name": "tomcat01", "EndpointID": "f97305672ae617f207dfef1b3dc250d2b8d6a9ec9b36b1b0115e2456f18c44c6", "MacAddress": "02:42:ac:12:00:03", "IPv4Address": "172.18.0.3/16", "IPv6Address": "" } }
Both containers are now attached to the custom network: tomcat-custom at 172.18.0.2/16 and tomcat01 at 172.18.0.3/16.
9. With the network configured, try to reach tomcat-custom from tomcat01 again
[root@10 /]# docker exec -it tomcat01 ping 172.18.0.2
PING 172.18.0.2 (172.18.0.2) 56(84) bytes of data.
64 bytes from 172.18.0.2: icmp_seq=1 ttl=64 time=0.032 ms
64 bytes from 172.18.0.2: icmp_seq=2 ttl=64 time=0.080 ms
64 bytes from 172.18.0.2: icmp_seq=3 ttl=64 time=0.055 ms
The result shows that tomcat01 can now communicate with tomcat-custom, since they are on the same network.
10. Looking at the network interfaces on CentOS now, the corresponding virtual NICs are there
[root@10 /]# ip a
...
23: vethc30bd52@if22: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-af392e4739d8 state UP group default
    link/ether 2e:a1:c8:a2:e5:83 brd ff:ff:ff:ff:ff:ff link-netnsid 5
    inet6 fe80::2ca1:c8ff:fea2:e583/64 scope link
       valid_lft forever preferred_lft forever
25: veth69ea87b@if24: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether 92:eb:8f:65:fe:7a brd ff:ff:ff:ff:ff:ff link-netnsid 6
    inet6 fe80::90eb:8fff:fe65:fe7a/64 scope link
       valid_lft forever preferred_lft forever
27: veth068cc5c@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-af392e4739d8 state UP group default
    link/ether ea:44:90:6c:0d:49 brd ff:ff:ff:ff:ff:ff link-netnsid 6
    inet6 fe80::e844:90ff:fe6c:d49/64 scope link
       valid_lft forever preferred_lft forever
When a user creates a custom network, the Docker engine by default runs an embedded DNS server for the containers that join it, so containers on the same network can communicate by container name. This avoids having to redeploy downstream systems when a container's IP changes.
For non-custom networks, see the official documentation on configuring container DNS for the specific settings.
1. Test this with the custom network and containers created above
[root@10 /]# docker exec -it tomcat01 ping tomcat-custom
PING tomcat-custom (172.18.0.2) 56(84) bytes of data.
64 bytes from tomcat-custom.custom (172.18.0.2): icmp_seq=1 ttl=64 time=0.031 ms
64 bytes from tomcat-custom.custom (172.18.0.2): icmp_seq=2 ttl=64 time=0.038 ms
64 bytes from tomcat-custom.custom (172.18.0.2): icmp_seq=3 ttl=64 time=0.040 ms

[root@10 /]# docker exec -it tomcat-custom ping tomcat01
PING tomcat01 (172.18.0.3) 56(84) bytes of data.
64 bytes from tomcat01.custom (172.18.0.3): icmp_seq=1 ttl=64 time=0.031 ms
64 bytes from tomcat01.custom (172.18.0.3): icmp_seq=2 ttl=64 time=0.038 ms
64 bytes from tomcat01.custom (172.18.0.3): icmp_seq=3 ttl=64 time=0.040 ms
tomcat01 and tomcat-custom can reach each other by container name.
端口映射對於咱們來講不陌生,平時訪問服務,例如tomcat,都是經過ip+服務端口進行訪問,如:localhost:8080,可是容器是寄生在宿主機器上,所以,若是咱們想被外部訪問,還須要映射到宿主的端口上,經過宿主的ip+port進行訪問,以下如圖所示:
Next, let's practice Docker's port mapping operations.
1. Create a tomcat-port container, mapping it to host port 8999
[root@10 /]# docker run -d --name tomcat-port -p 8999:8080 tomcat
0b5b014ae2552b85aff55b385ba20518b38509b5670a95ad9eea09475ea26629
2. Enter the container and access the service from inside it
[root@10 /]# docker exec -it tomcat-port curl -i localhost:8080
HTTP/1.1 404
Content-Type: text/html;charset=utf-8
Content-Language: en
Content-Length: 713
Date: Thu, 06 Feb 2020 07:43:59 GMT
The result shows that the request actually succeeded; the 404 only appears because I haven't configured the tomcat management page.
3. Access the container from CentOS
This time we access it via the container IP + port:
[root@10 /]# curl -I 172.17.0.4:8080
HTTP/1.1 404
Content-Type: text/html;charset=utf-8
Content-Language: en
Transfer-Encoding: chunked
Date: Thu, 06 Feb 2020 07:49:41 GMT
This also succeeds.
4. Access it from the outer host machine, i.e. my MacOS system
This time we use the VM's CentOS IP + the mapped port:
192:centos7 evan$ curl -I 192.168.100.12:8999
HTTP/1.1 404
Content-Type: text/html;charset=utf-8
Content-Language: en
Transfer-Encoding: chunked
Date: Thu, 06 Feb 2020 07:52:52 GMT
This works as well.
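You can also list a container's port mappings directly with docker port (a quick check; the output should look roughly like the comment):

docker port tomcat-port   # roughly: 8080/tcp -> 0.0.0.0:8999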
This chapter covered Docker's single-host networking principles and the Linux virtualization technology behind them. Finally, let's revisit the questions from the beginning; by now you should have the answers.
1. How do two containers on the same host communicate?
Docker implements inter-container communication on top of the Virtual Ethernet Pair technique, but not as direct end-to-end links: in the default bridge network mode, the Docker engine creates a veth pair for each container, one end in the container and one in the host network, and connects them indirectly through a bridge, while network namespaces provide the network isolation.
2. How do you access a container from outside the server?
From outside the server, containers are reached through port mapping: the IP address and port you access are not the container's real IP and port but a mapping on the host machine, so you access the container via the host's IP/domain plus the mapped host port.
3. What are Docker's three network modes?
Docker's three common network modes are bridge, none, and host:
bridge - Docker's default network mode; it has an independent namespace, supports custom networks of all kinds, and provides the network isolation containers require
host - the container shares the host's network namespace and protocol stack and can use all of the host's network interfaces directly; the simplest and lowest-latency mode
none - the Docker container gets its own network namespace but no network configuration at all: no NIC, IP, routes, and so on, leaving developers free to customize it as needed