Jumbo Frames
The IEEE 802.3 Ethernet standard only specifies support for a 1500-byte frame MTU, for a total frame size of 1518 bytes (growing to 1522 bytes when an IEEE 802.1Q VLAN/QoS tag is used). Jumbo frames typically use a 9000-byte frame MTU, for a total frame size of 9018/9022 bytes.
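These totals follow directly from the fixed Ethernet framing overhead; a quick sanity check in Python (the header, FCS, and tag sizes are standard Ethernet values, not from the original post):

import struct  # not needed for the arithmetic; shown only to keep this self-contained

ETH_HEADER = 14   # dst MAC (6) + src MAC (6) + EtherType (2)
ETH_FCS = 4       # frame check sequence
VLAN_TAG = 4      # optional IEEE 802.1Q tag

def frame_size(mtu, tagged=False):
    """On-wire Ethernet frame size for a given payload MTU."""
    return mtu + ETH_HEADER + ETH_FCS + (VLAN_TAG if tagged else 0)

assert frame_size(1500) == 1518
assert frame_size(1500, tagged=True) == 1522
assert frame_size(9000) == 9018
assert frame_size(9000, tagged=True) == 9022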
Jumbo frames are not yet part of the official IEEE 802.3 Ethernet standard, so the degree of support may differ between hardware vendors.
With jumbo frames, the larger effective payload per frame improves bandwidth efficiency. At the same time, larger packets increase transmission latency, so jumbo frames are a poor fit for latency-sensitive traffic.
MTU configuration options in Neutron
To summarize the option descriptions: global_physnet_mtu and physical_network_mtus together define the MTU of the underlay physical networks, while path_mtu defines the MTU of the overlay networks.
Three use cases for tuning MTU
Single-MTU physical network
In neutron.conf:
[DEFAULT]
global_physnet_mtu = 9000
In ml2.ini:
[ml2]
path_mtu = 9000
This configuration sets the MTU of all underlay networks (flat, vlan) and all overlay networks (vxlan, gre) to 9000.
Multiple-MTU physical networks
In neutron.conf:
[DEFAULT]
global_physnet_mtu = 9000
In ml2.ini:
[ovs]
bridge_mappings = provider1:eth1,provider2:eth2,provider3:eth3
[ml2]
physical_network_mtus = provider2:4000,provider3:1500
path_mtu = 9000
This configuration sets the MTU of underlay network provider2 to 4000 and of provider3 to 1500, while the others (e.g., provider1) use 9000. The overlay networks use an MTU of 9000.
Overlay network MTU
In neutron.conf:
[DEFAULT]
global_physnet_mtu = 9000
In ml2.ini:
[ml2]
path_mtu = 4000
This configuration sets the MTU of all underlay networks to 9000 and of all overlay networks to 4000.
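A minimal standalone sketch of how these options combine, mirroring the type-driver logic shown in the next section (the constants stand in for Neutron's p_const.IP_HEADER_LENGTH and p_const.VXLAN_ENCAP_OVERHEAD; 20 bytes for an IPv4 header, 30 bytes for UDP + VXLAN + inner Ethernet):

IP_HEADER = {4: 20, 6: 40}      # stand-in for p_const.IP_HEADER_LENGTH
VXLAN_OVERHEAD = 30             # stand-in for p_const.VXLAN_ENCAP_OVERHEAD

def underlay_mtu(global_physnet_mtu, physnet_mtus, physnet):
    # flat/vlan: smaller of the global default and any per-physnet override
    candidates = [global_physnet_mtu]
    if physnet in physnet_mtus:
        candidates.append(physnet_mtus[physnet])
    return min(candidates)

def vxlan_mtu(global_physnet_mtu, path_mtu, ip_version=4):
    # tunnels: smaller of global_physnet_mtu and path_mtu, minus encapsulation
    return min(global_physnet_mtu, path_mtu) - IP_HEADER[ip_version] - VXLAN_OVERHEAD

# Use case 2: per-physnet overrides
mtus = {'provider2': 4000, 'provider3': 1500}
assert underlay_mtu(9000, mtus, 'provider1') == 9000
assert underlay_mtu(9000, mtus, 'provider2') == 4000

# Use cases 1 and 3: effective VXLAN tenant-network MTU
assert vxlan_mtu(9000, path_mtu=9000) == 8950
assert vxlan_mtu(9000, path_mtu=4000) == 3950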
A brief look at the code
MTU handling when creating a network resource
For flat and vlan networks, Neutron derives the minimum usable MTU from the actual physical network mapping together with physical_network_mtus and global_physnet_mtu.
def get_deployment_physnet_mtu():
    return cfg.CONF.global_physnet_mtu

class BaseTypeDriver(api.ML2TypeDriver):
    def __init__(self):
        try:
            self.physnet_mtus = helpers.parse_mappings(
                cfg.CONF.ml2.physical_network_mtus, unique_values=False
            )
        except Exception as e:
            LOG.error("Failed to parse physical_network_mtus: %s", e)
            self.physnet_mtus = []

    def get_mtu(self, physical_network=None):
        return p_utils.get_deployment_physnet_mtu()

class FlatTypeDriver(helpers.BaseTypeDriver):
    ...
    def get_mtu(self, physical_network):
        seg_mtu = super(FlatTypeDriver, self).get_mtu()
        mtu = []
        if seg_mtu > 0:
            mtu.append(seg_mtu)
        if physical_network in self.physnet_mtus:
            mtu.append(int(self.physnet_mtus[physical_network]))
        return min(mtu) if mtu else 0

class VlanTypeDriver(helpers.SegmentTypeDriver):
    ...
    def get_mtu(self, physical_network):
        seg_mtu = super(VlanTypeDriver, self).get_mtu()
        mtu = []
        if seg_mtu > 0:
            mtu.append(seg_mtu)
        if physical_network in self.physnet_mtus:
            mtu.append(int(self.physnet_mtus[physical_network]))
        return min(mtu) if mtu else 0
For Geneve, GRE and VXLAN networks, Neutron takes the smaller of global_physnet_mtu and path_mtu, then subtracts the per-type encapsulation header overhead to arrive at the actually usable MTU.
class _TunnelTypeDriverBase(helpers.SegmentTypeDriver):
    ...
    def get_mtu(self, physical_network=None):
        seg_mtu = super(_TunnelTypeDriverBase, self).get_mtu()
        mtu = []
        if seg_mtu > 0:
            mtu.append(seg_mtu)
        if cfg.CONF.ml2.path_mtu > 0:
            mtu.append(cfg.CONF.ml2.path_mtu)
        version = cfg.CONF.ml2.overlay_ip_version
        ip_header_length = p_const.IP_HEADER_LENGTH[version]
        return min(mtu) - ip_header_length if mtu else 0

class GeneveTypeDriver(type_tunnel.ML2TunnelTypeDriver):
    ...
    def get_mtu(self, physical_network=None):
        mtu = super(GeneveTypeDriver, self).get_mtu()
        return mtu - self.max_encap_size if mtu else 0

class GreTypeDriver(type_tunnel.ML2TunnelTypeDriver):
    ...
    def get_mtu(self, physical_network=None):
        mtu = super(GreTypeDriver, self).get_mtu(physical_network)
        return mtu - p_const.GRE_ENCAP_OVERHEAD if mtu else 0

class VxlanTypeDriver(type_tunnel.EndpointTunnelTypeDriver):
    ...
    def get_mtu(self, physical_network=None):
        mtu = super(VxlanTypeDriver, self).get_mtu()
        return mtu - p_const.VXLAN_ENCAP_OVERHEAD if mtu else 0
When a user actually creates a network resource without explicitly specifying an MTU, the system-defined maximum usable MTU for that network type is used. If the user does specify an MTU, Neutron checks that it does not exceed the system-defined maximum usable MTU for the network type.
def _get_network_mtu(self, network_db, validate=True):
    mtus = []
    ...
    for s in segments:
        segment_type = s.get('network_type')
        if segment_type is None:
            ...
        else:
            mtu = type_driver.get_mtu(s['physical_network'])
        # If mtu is zero, the assumption
        # then is that for the segment type, MTU has no meaning or
        # is not defined
        if mtu:
            mtus.append(mtu)

    max_mtu = min(mtus) if mtus else p_utils.get_deployment_physnet_mtu()
    net_mtu = network_db.get('mtu')

    if validate:
        # validate that requested mtu conforms to allocated segments
        if net_mtu and max_mtu and max_mtu < net_mtu:
            msg = _("Requested MTU is too big, maximum is %d") % max_mtu
            raise exc.InvalidInput(error_message=msg)

    # if mtu is not set in database, use the maximum possible
    return net_mtu or max_mtu
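This validation is what surfaces as an API error. For example, under the first use case above a VXLAN network tops out at 8950, so a request such as the following (names hypothetical; requires admin rights for provider attributes) would be rejected with "Requested MTU is too big, maximum is 8950":
openstack network create --provider-network-type vxlan --mtu 9000 net1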
Setting the MTU of VM tap devices
In Neutron networks implemented with Linux Bridge, when the Linux Bridge agent detects a new device, it uses an ip link set operation to set the MTU of the VM's tap device attached to the bridge, based on the network's MTU. Networks implemented with Open vSwitch have no such handling: in practice, the tap device's MTU must be set manually with ovs-vsctl set Interface <tap name> mtu_request=<MTU Value>.
class LinuxBridgeManager(amb.CommonAgentManagerBase):
    def plug_interface(self, network_id, network_segment, tap_name,
                       device_owner):
        return self.add_tap_interface(network_id, network_segment.network_type,
                                      network_segment.physical_network,
                                      network_segment.segmentation_id,
                                      tap_name, device_owner,
                                      network_segment.mtu)

    def _set_tap_mtu(self, tap_device_name, mtu):
        ip_lib.IPDevice(tap_device_name).link.set_mtu(mtu)
Setting the MTU of network-service tap devices
When DHCP- and router-related tap devices are plugged, Neutron sets each tap device's MTU by running "ip link set <tap name> mtu <MTU value>" inside the device's namespace, based on the network's MTU.
class OVSInterfaceDriver(LinuxInterfaceDriver):
    def plug_new(self, network_id, port_id, device_name, mac_address,
                 bridge=None, namespace=None, prefix=None, mtu=None):
        ...
        # NOTE(ihrachys): the order here is significant: we must set MTU after
        # the device is moved into a namespace, otherwise OVS bridge does not
        # allow to set MTU that is higher than the least of all device MTUs on
        # the bridge
        if mtu:
            self.set_mtu(device_name, mtu, namespace=namespace, prefix=prefix)
        else:
            LOG.warning("No MTU configured for port %s", port_id)
        ...

    def set_mtu(self, device_name, mtu, namespace=None, prefix=None):
        if self.conf.ovs_use_veth:
            tap_name = self._get_tap_name(device_name, prefix)
            root_dev, ns_dev = _get_veth(
                tap_name, device_name, namespace2=namespace)
            root_dev.link.set_mtu(mtu)
        else:
            ns_dev = ip_lib.IPWrapper(namespace=namespace).device(device_name)
        ns_dev.link.set_mtu(mtu)

class IpLinkCommand(IpDeviceCommandBase):
    ...
    def set_mtu(self, mtu_size):
        self._as_root([], ('set', self.name, 'mtu', mtu_size))
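The same operation can be performed by hand when debugging; for a DHCP port, for example (namespace and tap names are placeholders, following Neutron's qdhcp- naming convention):
ip netns exec qdhcp-<network-id> ip link set <tap name> mtu <MTU value>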
Setting the MTU of inter-bridge veth pairs
Since the OpenStack Juno release, Neutron has used OVS patch ports instead of Linux veth pairs to connect OVS bridges (for better performance), but the veth option is still available. Setting use_veth_interconnection = true in openvswitch_agent.ini enables veth-based bridge interconnection. With this enabled, veth_mtu defaults to 9000; when the configured link MTU exceeds 9000, veth_mtu must be raised accordingly in openvswitch_agent.ini so the veths do not become a bottleneck.
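For reference, the two options live in different sections of openvswitch_agent.ini (matching the separate ovs_conf and agent_conf lookups in the code below):
[ovs]
use_veth_interconnection = true
[agent]
veth_mtu = 9000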
class OVSNeutronAgent(l2population_rpc.L2populationRpcCallBackTunnelMixin,
                      dvr_rpc.DVRAgentRpcCallbackMixin):
    def __init__(self, bridge_classes, ext_manager, conf=None):
        ...
        self.use_veth_interconnection = ovs_conf.use_veth_interconnection
        self.veth_mtu = agent_conf.veth_mtu
        ...
    def setup_physical_bridges(self, bridge_mappings):
        '''Setup the physical network bridges.

        Creates physical network bridges and links them to the
        integration bridge using veths or patch ports.

        :param bridge_mappings: map physical network names to bridge names.
        '''
        self.phys_brs = {}
        self.int_ofports = {}
        self.phys_ofports = {}
        ip_wrapper = ip_lib.IPWrapper()
        ovs = ovs_lib.BaseOVS()
        ovs_bridges = ovs.get_bridges()
        for physical_network, bridge in bridge_mappings.items():
            ...
            if self.use_veth_interconnection:
                # enable veth to pass traffic
                int_veth.link.set_up()
                phys_veth.link.set_up()
                if self.veth_mtu:
                    # set up mtu size for veth interfaces
                    int_veth.link.set_mtu(self.veth_mtu)
                    phys_veth.link.set_mtu(self.veth_mtu)
            else:
                # associate patch ports to pass traffic
                self.int_br.set_db_attribute('Interface', int_if_name,
                                             'options', {'peer': phys_if_name})
                br.set_db_attribute('Interface', phys_if_name,
                                    'options', {'peer': int_if_name})
How VM NICs get their MTU
The NIC inside a VM has its MTU configured as a side effect of the VM's DHCP request for an IP address. RFC 2132 (DHCP Options and BOOTP Vendor Extensions) explicitly defines the Interface MTU option: DHCP option code 26 carries the interface MTU as a two-byte value, laid out as shown below.
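The option layout, reproduced from RFC 2132 section 5.1 (the minimum legal MTU value is 68):

    Code   Len      MTU
   +-----+-----+-----+-----+
   |  26 |  2  |  m1 |  m2 |
   +-----+-----+-----+-----+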
In the DHCP agent, dnsmasq's spawn_process adjusts its own startup arguments according to the network's MTU, so that VMs correctly configure their NIC MTU during the DHCP exchange.
class Dnsmasq(DhcpLocalProcess):
    def _build_cmdline_callback(self, pid_file):
        # We ignore local resolv.conf if dns servers are specified
        # or if local resolution is explicitly disabled.
        ...
        mtu = getattr(self.network, 'mtu', 0)
        # Do not advertise unknown mtu
        if mtu > 0:
            cmd.append('--dhcp-option-force=option:mtu,%d' % mtu)
        ...
        return cmd
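For a network MTU of 8950, for instance, the resulting dnsmasq argument is --dhcp-option-force=option:mtu,8950; the -force variant makes dnsmasq send option 26 even to clients that do not explicitly request it.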
Probing the MTU
You can verify that the MTU is set correctly by sending ICMP packets of a specified size with the IP don't-fragment bit set. Note that the size here is the ICMP data size; it does not include the ICMP header (8 bytes) or the IP header (20 bytes).
On Windows:
ping -f -l <size> <target_name/target_ip>
On Linux:
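ping -M do -s <size> <target_name/target_ip>
For example, to verify a 1500-byte MTU, send size = 1500 - 20 - 8 = 1472; for a 9000-byte jumbo path, size = 8972. If the probe exceeds the path MTU, ping reports that fragmentation is needed instead of receiving a reply.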