Namespaces and cgroups are the two main kernel technologies behind the software containerization trend (think Docker). Put simply, a cgroup provides unified resource accounting and limiting for a group of processes: it controls how much of the system's resources (CPU, memory, and so on) you may use. A namespace, by contrast, wraps and isolates a global system resource; through the kernel's ability to isolate and virtualize system resources, it limits what you can see.

The Linux 3.8 kernel provides six types of namespaces: Process ID (pid), Mount (mnt), Network (net), InterProcess Communication (ipc), UTS, and User ID (user). For example, processes inside a pid namespace can only see processes in the same namespace, and an mnt namespace lets a process have its own view of the file system (much like chroot). In this article I focus only on the network namespace, Network (net).

A network namespace gives all processes inside it a brand-new, isolated network stack, including its own network interfaces, routing tables, and iptables rules. Network namespaces therefore make virtual network environments possible, fully isolated from one another. This matters a great deal for tenant isolation in cloud computing, and Docker's network isolation is built on the same mechanism. With that background in place, let's get to the point: how to use network namespaces.
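As a quick way to see this isolation in action, a process dropped into a brand-new network namespace sees nothing but an unconfigured loopback interface (a minimal sketch using util-linux's unshare, which should be available on any recent distribution):

[root@localhost ~]# unshare --net ip addr   # runs ip addr inside a fresh netns: only lo, and it is DOWN
[root@localhost ~]# ip addr                 # back on the host, all the usual interfaces are still visible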
# Create two test containers
[root@localhost ~]# docker run -d -it --name busybox_1 busybox /bin/sh -c "while true;do sleep 3600;done"
[root@localhost ~]# docker run -d -it --name busybox_2 busybox /bin/sh -c "while true;do sleep 3600;done"
[root@localhost ~]# docker ps -a
# Check the containers' IP addresses
[root@localhost ~]# docker exec -it busybox_1 ip a
[root@localhost ~]# docker exec -it busybox_2 ip a
[root@localhost ~]# docker exec -it busybox_1 /bin/sh
/ # ping 172.17.0.3          # inside busybox_1, ping busybox_2's address
PING 172.17.0.3 (172.17.0.3): 56 data bytes
64 bytes from 172.17.0.3: seq=0 ttl=64 time=0.133 ms
64 bytes from 172.17.0.3: seq=1 ttl=64 time=0.097 ms
64 bytes from 172.17.0.3: seq=2 ttl=64 time=0.124 ms
[root@localhost ~]# docker exec -it busybox_2 ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.109 ms
64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.152 ms
Containers created on the same host can communicate with each other by default.
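To see why, you can inspect Docker's default bridge network: both containers are attached to the docker0 bridge through veth pairs (docker network inspect and ip link are standard commands; the exact interface names depend on your host):

[root@localhost ~]# docker network inspect bridge   # lists busybox_1 and busybox_2 with their 172.17.0.x addresses
[root@localhost ~]# ip link show docker0            # the Linux bridge that the containers' host-side veth ends attach to

Docker builds this plumbing for you automatically. To understand what happens underneath, let's rebuild the same setup by hand with network namespaces.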
# Create two network namespaces
[root@localhost ~]# ip netns add test1
[root@localhost ~]# ip netns add test2
[root@localhost ~]# ip netns list
test2
test1
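Note that ip netns add creates a named entry under /var/run/netns; each entry is a bind mount that keeps the namespace alive even when no process is running inside it:

[root@localhost ~]# ls /var/run/netns/
test1  test2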
[root@localhost ~]# ip netns exec test1 ip link   # only the loopback interface exists by default, and it is DOWN
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
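A fresh namespace does not even have working loopback; if you need 127.0.0.1 inside it, bring lo up explicitly (optional for this walkthrough):

[root@localhost ~]# ip netns exec test1 ip link set lo up
[root@localhost ~]# ip netns exec test1 ping -c 1 127.0.0.1   # loopback now answers inside test1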
[root@localhost ~]# ip link   # besides the usual lo and eth0, three more interfaces exist on the host
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state DOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:fd:34:4b brd ff:ff:ff:ff:ff:ff
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
    link/ether 02:42:23:c0:91:f9 brd ff:ff:ff:ff:ff:ff
105: veth0a87245@if104: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default
    link/ether f2:be:7e:3c:02:3c brd ff:ff:ff:ff:ff:ff link-netnsid 0
107: vethabf8073@if106: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default
    link/ether d6:dd:20:cf:84:4a brd ff:ff:ff:ff:ff:ff link-netnsid 1
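A handy trick for matching a host-side veth to its container: inside the container, /sys/class/net/eth0/iflink holds the ifindex of the host-side peer. With the indexes from the listing above, busybox_1's eth0 (if104) should pair with host interface 105:

[root@localhost ~]# docker exec busybox_1 cat /sys/class/net/eth0/iflink   # prints the peer's ifindex, e.g. 105
[root@localhost ~]# ip -o link | grep '^105:'                              # -> veth0a87245, the host-side peer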
What is a veth pair?
A veth pair is not a single device but a pair of interconnected devices linking two virtual Ethernet ports: whatever goes into one end comes out of the other. veth pairs are typically used together with namespaces: place one endpoint in each of two namespaces (ns1 and ns2), configure an IP on each endpoint, and the two can ping each other.
A simple topology diagram (sketched below):
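All names and addresses in this sketch are the ones used in the commands that follow:

+-------------------+                    +-------------------+
| netns: test1      |                    | netns: test2      |
|   veth-test1      |<--- veth pair ---->|   veth-test2      |
|   192.168.1.1/24  |                    |   192.168.1.2/24  |
+-------------------+                    +-------------------+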
Create a veth pair
# Create the pair veth-test1 / veth-test2
[root@localhost ~]# ip link add veth-test1 type veth peer name veth-test2
[root@localhost ~]# ip link   # verify: two new interfaces, indexes 108 and 109, have appeared
108: veth-test2@veth-test1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether be:08:e6:25:be:01 brd ff:ff:ff:ff:ff:ff
109: veth-test1@veth-test2: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether da:97:e4:b1:cc:38 brd ff:ff:ff:ff:ff:ff
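The @ suffix already shows that each interface names the other as its peer; if you want to confirm the link type explicitly, ip -d prints it:

[root@localhost ~]# ip -d link show veth-test1   # the detailed output includes the link type "veth"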
[root@localhost ~]# ip link set veth-test1 netns test1
[root@localhost ~]# ip netns exec test1 ip link   # verify the move succeeded; the interface state is DOWN
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
109: veth-test1@if108: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether da:97:e4:b1:cc:38 brd ff:ff:ff:ff:ff:ff link-netnsid 0
[root@localhost ~]# ip link   # on the host, veth-test1 has disappeared; only veth-test2 (index 108) remains
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:fd:34:4b brd ff:ff:ff:ff:ff:ff
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
    link/ether 02:42:23:c0:91:f9 brd ff:ff:ff:ff:ff:ff
105: veth0a87245@if104: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default
    link/ether f2:be:7e:3c:02:3c brd ff:ff:ff:ff:ff:ff link-netnsid 0
107: vethabf8073@if106: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default
    link/ether d6:dd:20:cf:84:4a brd ff:ff:ff:ff:ff:ff link-netnsid 1
108: veth-test2@if109: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether be:08:e6:25:be:01 brd ff:ff:ff:ff:ff:ff link-netnsid 2
# Move the other end into test2
[root@localhost ~]# ip link set veth-test2 netns test2
[root@localhost ~]# ip netns exec test2 ip link
# Assign IP 192.168.1.1 to the veth-test1 interface in the test1 namespace
[root@localhost ~]# ip netns exec test1 ip addr add 192.168.1.1/24 dev veth-test1
# Likewise, 192.168.1.2 for veth-test2 in test2
[root@localhost ~]# ip netns exec test2 ip addr add 192.168.1.2/24 dev veth-test2
[root@localhost ~]# ip netns exec test1 ip a   # veth-test1 now has the IP bound, but its state is still DOWN
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
109: veth-test1@if108: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether da:97:e4:b1:cc:38 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet 192.168.1.1/24 scope global veth-test1
       valid_lft forever preferred_lft forever
[root@localhost ~]# ip netns exec test2 ip a   # same for test2
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
108: veth-test2@if109: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether be:08:e6:25:be:01 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.1.2/24 scope global veth-test2
       valid_lft forever preferred_lft forever
# Bring both interfaces up
[root@localhost ~]# ip netns exec test1 ip link set dev veth-test1 up
[root@localhost ~]# ip netns exec test2 ip link set dev veth-test2 up
[root@localhost ~]# ip netns exec test1 ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
109: veth-test1@if108: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether da:97:e4:b1:cc:38 brd ff:ff:ff:ff:ff:ff link-netnsid 1
[root@localhost ~]# ip netns exec test2 ip link
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
108: veth-test2@if109: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether be:08:e6:25:be:01 brd ff:ff:ff:ff:ff:ff link-netnsid 0
# Connectivity test
[root@localhost ~]# ip netns exec test1 ping 192.168.1.2
PING 192.168.1.2 (192.168.1.2) 56(84) bytes of data.
64 bytes from 192.168.1.2: icmp_seq=1 ttl=64 time=0.056 ms
64 bytes from 192.168.1.2: icmp_seq=2 ttl=64 time=0.057 ms
[root@localhost ~]# ip netns exec test2 ping 192.168.1.1
PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
64 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=0.039 ms
64 bytes from 192.168.1.1: icmp_seq=2 ttl=64 time=0.084 ms
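When you are done experimenting, delete the namespaces. Removing a namespace destroys the interfaces inside it, and destroying one end of a veth pair automatically destroys the other:

[root@localhost ~]# ip netns del test1   # removes veth-test1; its peer veth-test2 disappears with it
[root@localhost ~]# ip netns del test2
[root@localhost ~]# ip netns list        # nothing left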