| OS | SELinux | firewalld | NICs |
| --- | --- | --- | --- |
| CentOS7 | disabled | disabled | eth0, eth1 |
Reference: https://www.kernel.org/doc/Documentation/networking/bonding.txt
The Linux bonding driver provides a method for aggregating multiple network interfaces into a single logical "bonded" interface. The behavior of the bonded interface depends on the mode; generally speaking, the modes provide either hot-standby or load-balancing services. Link integrity monitoring can also be performed.
There are seven modes (0-6); the default is balance-rr (round-robin). A sketch of selecting a non-default mode follows the list below.
Mode 0 (balance-rr), round-robin policy: packets are transmitted sequentially from the first available slave to the last. Provides load balancing and fault tolerance.
Mode 1 (active-backup), active-backup policy: only one slave in the bond is active; another slave becomes active only if the active slave fails. The bond's MAC address is externally visible on only one port (network adapter), to avoid confusing the switch.
Mode 2 (balance-xor), XOR policy: provides fault tolerance and load balancing. The interface matches the MAC address of an incoming request to the MAC address of one of the slave NICs; once the connection is established, transmissions are sent out sequentially starting from the first available interface.
Mode 3 (broadcast), broadcast policy: transmits everything on all slave interfaces. Provides fault tolerance.
Mode 4 (802.3ad): IEEE 802.3ad dynamic link aggregation policy. Creates aggregation groups that share the same speed and duplex settings, and transmits and receives on all slaves in the active aggregator. Requires an 802.3ad-capable switch.
Mode 5 (balance-tlb), transmit load balancing (TLB) policy: provides fault tolerance and load balancing. Outgoing traffic is distributed according to the current load on each slave interface; incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over the failed slave's MAC address.
Mode 6 (balance-alb), adaptive load balancing (ALB) policy: provides fault tolerance and load balancing, including transmit and receive load balancing for IPv4 traffic. Receive load balancing is achieved through ARP negotiation.
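The mode is chosen when the bond connection is created, and tunables such as the MII polling interval (seen later in /proc/net/bonding/bond0) go in the same place. A minimal sketch, assuming nmcli's bond.options property is available as on current CentOS 7 builds; the names bond02/bond1 and the 802.3ad mode here are illustrative, not part of this setup:

    # List the driver's supported modes and its default:
    [root@CentOS7 ~]# modinfo bonding | grep -w mode

    # Hypothetical: create a second bond in 802.3ad mode with an explicit
    # 100 ms MII polling interval (needs an 802.3ad-capable switch):
    [root@CentOS7 ~]# nmcli con add type bond con-name bond02 ifname bond1 \
        bond.options "mode=802.3ad,miimon=100"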
Check the status of the added eth0 and eth1 NICs
    [root@CentOS7 ~]# nmcli dev status
    DEVICE  TYPE      STATE         CONNECTION
    eth0    ethernet  disconnected  --
    eth1    ethernet  disconnected  --
1) Add a bonding interface using active-backup mode
    [root@CentOS7 ~]# nmcli con add type bond con-name bond01 ifname bond0 mode active-backup
    Connection 'bond01' (ca0305ce-110c-4411-a48e-5952a2c72716) successfully added.
2) Add the slave interfaces
    [root@CentOS7 ~]# nmcli con add type bond-slave con-name bond01-slave0 ifname eth0 master bond0
    Connection 'bond01-slave0' (5dd5a90c-9a2f-4f1d-8fcc-c7f4b333e3d2) successfully added.
    [root@CentOS7 ~]# nmcli con add type bond-slave con-name bond01-slave1 ifname eth1 master bond0
    Connection 'bond01-slave1' (a8989d38-cc0b-4a4e-942d-3a2e1eb8f95b) successfully added.
3) Bring up the slave interfaces
    [root@CentOS7 ~]# nmcli con up bond01-slave0
    Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/14)
    [root@CentOS7 ~]# nmcli con up bond01-slave1
    Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/15)
4) Bring up the bond interface
    [root@CentOS7 ~]# nmcli con up bond01
    Connection successfully activated (master waiting for slaves) (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/16)
5) Check the bond status
    [root@CentOS7 ~]# cat /proc/net/bonding/bond0
    Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

    Bonding Mode: fault-tolerance (active-backup)
    Primary Slave: None
    Currently Active Slave: eth0
    MII Status: up
    MII Polling Interval (ms): 100
    Up Delay (ms): 0
    Down Delay (ms): 0

    Slave Interface: eth0
    MII Status: up
    Speed: 1000 Mbps
    Duplex: full
    Link Failure Count: 0
    Permanent HW addr: 00:0c:29:08:2a:73
    Slave queue ID: 0

    Slave Interface: eth1
    MII Status: up
    Speed: 1000 Mbps
    Duplex: full
    Link Failure Count: 0
    Permanent HW addr: 00:0c:29:08:2a:7d
    Slave queue ID: 0
6) Test
From another Linux host, ping this machine's bond0 IP address, then manually take down the eth0 NIC and check whether the active/backup failover occurs.
Check the local bond0 interface IP:
    [root@CentOS7 ~]# ip ad show dev bond0|sed -rn '3s#.* (.*)/24.*#\1#p'
    192.168.8.129

    [root@CentOS6 ~]# ping 192.168.8.129
    PING 192.168.8.129 (192.168.8.129) 56(84) bytes of data.
    64 bytes from 192.168.8.129: icmp_seq=1 ttl=64 time=0.600 ms
    64 bytes from 192.168.8.129: icmp_seq=2 ttl=64 time=0.712 ms
    64 bytes from 192.168.8.129: icmp_seq=3 ttl=64 time=2.20 ms
    64 bytes from 192.168.8.129: icmp_seq=4 ttl=64 time=0.986 ms
    64 bytes from 192.168.8.129: icmp_seq=7 ttl=64 time=0.432 ms
    64 bytes from 192.168.8.129: icmp_seq=8 ttl=64 time=0.700 ms
    64 bytes from 192.168.8.129: icmp_seq=9 ttl=64 time=0.571 ms
    ^C
    --- 192.168.8.129 ping statistics ---
    9 packets transmitted, 7 received, 22% packet loss, time 8679ms
    rtt min/avg/max/mdev = 0.432/0.887/2.209/0.562 ms
While the other host was pinging, taking down the eth0 NIC lost two packets in the middle (icmp_seq 5 and 6 are missing above).
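The write-up does not show how eth0 was taken down; on a VM it was most likely disconnected in the hypervisor, which matches the MII link state going down below. A software-only way to trigger the same failover, assuming console access (do not run this over an SSH session riding on eth0):

    # Detach eth0 from the bond; miimon (100 ms polling) marks it down
    # and active-backup fails over to eth1:
    [root@CentOS7 ~]# nmcli dev disconnect eth0

    # Watch the failover live in the driver's status file:
    [root@CentOS7 ~]# watch -n1 cat /proc/net/bonding/bond0

    # Re-attach eth0 afterwards so it rejoins the bond as the backup slave:
    [root@CentOS7 ~]# nmcli con up bond01-slave0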
    [root@CentOS7 ~]# cat /proc/net/bonding/bond0
    Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

    Bonding Mode: fault-tolerance (active-backup)
    Primary Slave: None
    Currently Active Slave: eth1
    MII Status: up
    MII Polling Interval (ms): 100
    Up Delay (ms): 0
    Down Delay (ms): 0

    Slave Interface: eth0
    MII Status: down
    Speed: Unknown
    Duplex: Unknown
    Link Failure Count: 4
    Permanent HW addr: 00:0c:29:08:2a:73
    Slave queue ID: 0

    Slave Interface: eth1
    MII Status: up
    Speed: 1000 Mbps
    Duplex: full
    Link Failure Count: 2
    Permanent HW addr: 00:0c:29:08:2a:7d
    Slave queue ID: 0
The current active slave is now eth1, so the active/backup failover succeeded.
Configuration files generated automatically after configuring with nmcli
    [root@CentOS7 ~]# cd /etc/sysconfig/network-scripts/
    [root@CentOS7 network-scripts]# ls ifcfg-bond*
    ifcfg-bond01  ifcfg-bond-slave-eth0  ifcfg-bond-slave-eth1
    [root@CentOS7 network-scripts]# cat ifcfg-bond01
    BONDING_OPTS=mode=active-backup
    TYPE=Bond
    BONDING_MASTER=yes
    PROXY_METHOD=none
    BROWSER_ONLY=no
    BOOTPROTO=dhcp
    DEFROUTE=yes
    IPV4_FAILURE_FATAL=no
    IPV6INIT=yes
    IPV6_AUTOCONF=yes
    IPV6_DEFROUTE=yes
    IPV6_FAILURE_FATAL=no
    IPV6_ADDR_GEN_MODE=stable-privacy
    NAME=bond01
    UUID=e5369ad8-2b8b-4cc1-aca2-67562282a637
    DEVICE=bond0
    ONBOOT=yes
    [root@CentOS7 network-scripts]# cat ifcfg-bond-slave-eth0
    TYPE=Ethernet
    NAME=bond01-slave0
    UUID=f6ed385e-e1ae-487d-b36a-43b13ac3f84f
    DEVICE=eth0
    ONBOOT=yes
    MASTER_UUID=e5369ad8-2b8b-4cc1-aca2-67562282a637
    MASTER=bond0
    SLAVE=yes

The configuration file for bond01-slave1 is essentially the same as this one.
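Note that BOOTPROTO=dhcp means the bond's address came from DHCP. To pin a static address instead, something like the following should work; the gateway 192.168.8.2 is an assumption, not taken from the original setup:

    # Hypothetical: switch bond01 to a static address and re-activate it:
    [root@CentOS7 ~]# nmcli con mod bond01 ipv4.method manual \
        ipv4.addresses 192.168.8.129/24 ipv4.gateway 192.168.8.2
    [root@CentOS7 ~]# nmcli con up bond01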
CentOS 7 also supports network teaming (the team driver), the newer alternative to bonding; the policy is implemented by a "runner". The available runners are listed below, and a runner-selection sketch follows the list.
broadcast: data is transmitted over all ports
activebackup: one port or link is used while the others are kept as backups
roundrobin: data is transmitted over all ports in turn
loadbalance: active Tx load balancing with a BPF-based Tx port selector
lacp: implements the 802.3ad Link Aggregation Control Protocol
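The runner is selected through the team's JSON config passed to nmcli. A minimal sketch of a non-default runner; the names team1/team1 and the tx_hash fields are illustrative, not part of this setup:

    # Hypothetical: a team using the loadbalance runner, hashing Tx traffic
    # on Ethernet and IPv4 headers:
    [root@CentOS7 ~]# nmcli con add type team con-name team1 ifname team1 \
        config '{"runner":{"name":"loadbalance","tx_hash":["eth","ipv4"]}}'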
1) Check the status of the added eth0 and eth1 NICs
    [root@CentOS7 ~]# nmcli dev status
    DEVICE  TYPE      STATE         CONNECTION
    eth0    ethernet  disconnected  --
    eth1    ethernet  disconnected  --
2) Add a team interface named team0 using the activebackup runner
    [root@CentOS7 ~]# nmcli con add type team ifname team0 con-name team0 config '{"runner":{"name":"activebackup"}}'
    Connection 'team0' (28b4e208-339f-4eb2-ae0f-6b07621e7685) successfully added.
3) Add the slave connections to team0
    [root@CentOS7 ~]# nmcli con add type team-slave ifname eth0 con-name team0-slave0 master team0
    Connection 'team0-slave0' (3c1b3008-ebeb-4e2d-9790-30111f1e1271) successfully added.
    [root@CentOS7 ~]# nmcli con add type team-slave ifname eth1 con-name team0-slave1 master team0
4) Bring up the team and its slave connections
    [root@CentOS7 ~]# nmcli con up team0
    [root@CentOS7 ~]# nmcli con up team0-slave0
    [root@CentOS7 ~]# nmcli con up team0-slave1
    [root@CentOS7 ~]# nmcli dev status
    DEVICE  TYPE      STATE      CONNECTION
    team0   team      connected  team0
    eth0    ethernet  connected  team0-slave0
    eth1    ethernet  connected  team0-slave1
5) Check the team state
    [root@CentOS7 ~]# teamdctl team0 state
    setup:
      runner: activebackup
    ports:
      eth0
        link watches:
          link summary: up
          instance[link_watch_0]:
            name: ethtool
            link: up
            down count: 0
      eth1
        link watches:
          link summary: up
          instance[link_watch_0]:
            name: ethtool
            link: up
            down count: 0
    runner:
      active port: eth0
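Two other inspection commands worth knowing, assuming the standard teamd package as shipped with CentOS 7:

    # JSON configuration teamd is actually running with:
    [root@CentOS7 ~]# teamdctl team0 config dump actual

    # Kernel-level view of the team's ports:
    [root@CentOS7 ~]# teamnl team0 ports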
6) Test
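The original leaves this step blank; a sketch of a failover test mirroring the bonding test above:

    # On another host, ping team0's address continuously (as in the bond test).
    # On this host, simulate failure of the active port:
    [root@CentOS7 ~]# nmcli dev disconnect eth0

    # "teamdctl team0 state" should now report "active port: eth1".
    # Re-attach eth0 afterwards so it becomes the backup again:
    [root@CentOS7 ~]# nmcli con up team0-slave0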
Configuration files generated automatically after configuring with nmcli
    [root@CentOS7 ~]# cd /etc/sysconfig/network-scripts/
    [root@CentOS7 network-scripts]# ls ifcfg-team0*
    [root@CentOS7 network-scripts]# grep -v "^IPV6" ifcfg-team0
    TEAM_CONFIG="{\"runner\":{\"name\":\"activebackup\"}}"
    PROXY_METHOD=none
    BROWSER_ONLY=no
    BOOTPROTO=dhcp
    DEFROUTE=yes
    IPV4_FAILURE_FATAL=no
    NAME=team0
    UUID=28b4e208-339f-4eb2-ae0f-6b07621e7685
    DEVICE=team0
    ONBOOT=yes
    DEVICETYPE=Team
    [root@CentOS7 network-scripts]# cat ifcfg-team0-slave0
    NAME=team0-slave0
    UUID=3c1b3008-ebeb-4e2d-9790-30111f1e1271
    DEVICE=eth0
    ONBOOT=yes
    TEAM_MASTER=team0
    DEVICETYPE=TeamPort
    [root@CentOS7 network-scripts]# cat ifcfg-team0-slave1
    NAME=team0-slave1
    DEVICE=eth1
    ONBOOT=yes
    TEAM_MASTER=team0
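If the team is no longer needed, deleting the connections with nmcli also removes the generated ifcfg files; a minimal cleanup sketch:

    # Remove the slave connections first, then the team itself:
    [root@CentOS7 ~]# nmcli con del team0-slave0 team0-slave1 team0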