Linux Base Driver for the Intel(R) Ethernet 10 Gigabit PCI Express Adapters
Intel 10 Gigabit Linux driver. Copyright(c) 1999-2018 Intel Corporation.
Contents
- Identifying Your Adapter
- Command Line Parameters
- Additional Configurations
- Known Issues
- Support
Identifying Your Adapter
The driver is compatible with devices based on the following:
- Intel(R) Ethernet Controller 82598
- Intel(R) Ethernet Controller 82599
- Intel(R) Ethernet Controller X520
- Intel(R) Ethernet Controller X540
- Intel(R) Ethernet Controller X550
- Intel(R) Ethernet Controller X552
- Intel(R) Ethernet Controller X553
For information on how to identify your adapter, and for the latest Intel network drivers, refer to the Intel Support website: https://www.intel.com/support
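One quick way to see which Ethernet controllers are installed on a Linux system is to list the PCI devices (a minimal sketch; assumes the lspci utility from pciutils is available):

lspci | grep -i ethernet // list Ethernet controllers, including Intel 10 Gigabit devices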
SFP+ Devices with Pluggable Optics
82599-BASED ADAPTERS
NOTES:
- If your 82599-based Intel(R) Network Adapter came with Intel optics or is an Intel(R) Ethernet Server Adapter X520-2, then it only supports Intel optics and/or the direct attach cables listed below.
- When 82599-based SFP+ devices are connected back to back, they should be set to the same speed setting via ethtool (see the example below). Results may vary if you mix speed settings.
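For example, to check the current speed on one end and then force both ends to 10 Gbps (a minimal sketch; eth0 is a placeholder for your actual interface name):

ethtool eth0 // show the current Speed and Auto-negotiation settings
ethtool -s eth0 speed 10000 duplex full autoneg off // force 10 Gbps to match the link partner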
Supported Intel modules:
- DUAL RATE 1G/10G SFP+ SR (bailed)
- DUAL RATE 1G/10G SFP+ LR (bailed)
The following is a list of 3rd party SFP+ modules that have received some testing. Not all modules are applicable to all devices.
- SFP+ SR bailed, 10G single rate
- SFP+ LR bailed, 10G single rate
- DUAL RATE 1G/10G SFP+ SR (No Bail)
- DUAL RATE 1G/10G SFP+ LR (No Bail)
82599-based adapters support all passive and active limiting direct attach cables that comply with SFF-8431 v4.1 and SFF-8472 v10.4 specifications.
Laser turns off for SFP+ when ifconfig ethX down
“ifconfig ethX down” turns off the laser for 82599-based SFP+ fiber adapters. “ifconfig ethX up” turns on the laser. Alternatively, you can use “ip link set [down/up] dev ethX” to turn the laser off and on.
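For example, assuming the interface is eth0:

ip link set down dev eth0 // turn the laser off
ip link set up dev eth0 // turn the laser on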
82599-based QSFP+ Adapters
NOTES:
- If your 82599-based Intel(R) Network Adapter came with Intel optics, it only supports Intel optics.
- 82599-based QSFP+ adapters only support 4×10 Gbps connections. 1×40 Gbps connections are not supported. QSFP+ link partners must be configured for 4×10 Gbps.
- 82599-based QSFP+ adapters do not support automatic link speed detection. The link speed must be configured to either 10 Gbps or 1 Gbps to match the link partner's speed capabilities. Incorrect speed configurations will result in failure to link.
- Intel(R) Ethernet Converged Network Adapter X520-Q1 only supports the optics and direct attach cables listed below.
Supported Intel module:
- DUAL RATE 1G/10G QSFP+ SRL (bailed)
82599-based QSFP+ adapters support all passive and active limiting QSFP+ direct attach cables that comply with SFF-8436 v4.1 specifications.
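Because automatic link speed detection is not supported on these adapters, the speed must be forced to match the link partner. A minimal ethtool sketch, assuming the interface is eth0 (use speed 1000 instead for 1 Gbps link partners):

ethtool -s eth0 speed 10000 duplex full autoneg off // force the 10 Gbps link speed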
82598-BASED ADAPTERS
NOTES:
- Intel(R) Ethernet Network Adapters that support removable optical modules only support their original module type (for example, the Intel(R) 10 Gigabit SR Dual Port Express Module only supports SR optical modules). If you plug in a different type of module, the driver will not load.
- Hot swapping/hot plugging optical modules is not supported.
- Only single speed, 10 gigabit modules are supported.
- LAN on Motherboard (LOMs) may support DA, SR, or LR modules. Other module types are not supported. Please see your system documentation for details.
The following is a list of SFP+ modules and direct attach cables that have received some testing. Not all modules are applicable to all devices.
- SFP+ SR bailed, 10G single rate
- SFP+ LR bailed, 10G single rate
82598-based adapters support all passive direct attach cables that comply with SFF-8431 v4.1 and SFF-8472 v10.4 specifications. Active direct attach cables are not supported.
Third party optic modules and cables referred to above are listed only for the purpose of highlighting third party specifications and potential compatibility, and are not recommendations or endorsements or sponsorship of any third party’s product by Intel. Intel is not endorsing or promoting products made by any third party and the third party reference is provided only to share information regarding certain optic modules and cables with the above specifications. There may be other manufacturers or suppliers, producing or supplying optic modules and cables with similar or matching descriptions. Customers must use their own discretion and diligence to purchase optic modules and cables from any third party of their choice. Customers are solely responsible for assessing the suitability of the product and/or devices and for the selection of the vendor for purchasing any product. THE OPTIC MODULES AND CABLES REFERRED TO ABOVE ARE NOT WARRANTED OR SUPPORTED BY INTEL. INTEL ASSUMES NO LIABILITY WHATSOEVER, AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF SUCH THIRD PARTY PRODUCTS OR SELECTION OF VENDOR BY CUSTOMERS.
Command Line Parameters
max_vfs
This parameter adds support for SR-IOV. It causes the driver to spawn up to the specified number of virtual functions (VFs). If the value is greater than 0, it will also force the VMDq parameter to be 1 or more.
NOTE: This parameter is only used on kernel 3.7.x and below. On kernel 3.8.x and above, use sysfs to enable VFs. Also, for Red Hat distributions, this parameter is only used on version 6.6 and older. For version 6.7 and newer, use sysfs. For example:
#echo $num_vf_enabled > /sys/class/net/$dev/device/sriov_numvfs // enable VFs
#echo 0 > /sys/class/net/$dev/device/sriov_numvfs // disable VFs
The parameters for the driver are referenced by position. Thus, if you have a dual port adapter, or more than one adapter in your system, and want N virtual functions per port, you must specify a number for each port, with the values separated by commas. For example:
modprobe ixgbe max_vfs=4

This will spawn 4 VFs on the first port.
modprobe ixgbe max_vfs=2,4

This will spawn 2 VFs on the first port and 4 VFs on the second port.
NOTE: Caution must be used in loading the driver with these parameters. Depending on your system configuration, number of slots, and so on, it is not always possible to predict which position on the command line corresponds to which port.
NOTE: Neither the device nor the driver control how VFs are mapped into config space. Bus layout will vary by operating system. On operating systems that support it, you can check sysfs to find the mapping.
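On such systems, each VF appears as a virtfn* link under the PF's PCI device in sysfs. A quick way to list the PCI functions backing the VFs (a sketch, assuming the PF is eth0):

ls -l /sys/class/net/eth0/device/virtfn*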
NOTE: When either SR-IOV mode or VMDq mode is enabled, hardware VLAN filtering and VLAN tag stripping/insertion will remain enabled. Please remove the old VLAN filter before the new VLAN filter is added. For example:

ip link set eth0 vf 0 vlan 100 // set VLAN 100 for VF 0
ip link set eth0 vf 0 vlan 0 // delete VLAN 100
ip link set eth0 vf 0 vlan 200 // set a new VLAN 200 for VF 0
With kernel 3.6, the driver supports the simultaneous usage of max_vfs and DCB features, subject to the constraints described below. Prior to kernel 3.6, the driver did not support the simultaneous operation of max_vfs greater than 0 and the DCB features (multiple traffic classes utilizing Priority Flow Control and Extended Transmission Selection).
When DCB is enabled, network traffic is transmitted and received through multiple traffic classes (packet buffers in the NIC). The traffic is associated with a specific class based on priority, which has a value of 0 through 7 used in the VLAN tag. When SR-IOV is not enabled, each traffic class is associated with a set of receive/transmit descriptor queue pairs. The number of queue pairs for a given traffic class depends on the hardware configuration. When SR-IOV is enabled, the descriptor queue pairs are grouped into pools. The Physical Function (PF) and each Virtual Function (VF) are each allocated a pool of receive/transmit descriptor queue pairs. When multiple traffic classes are configured (for example, DCB is enabled), each pool contains a queue pair from each traffic class. When a single traffic class is configured in the hardware, the pools contain multiple queue pairs from the single traffic class.
The number of VFs that can be allocated depends on the number of traffic classes that can be enabled. The configurable number of traffic classes for each enabled VF is as follows:
- 0 - 15 VFs = Up to 8 traffic classes, depending on device support
- 16 - 31 VFs = Up to 4 traffic classes
- 32 - 63 VFs = 1 traffic class
When VFs are configured, the PF is allocated one pool as well. The PF supports the DCB features with the constraint that each traffic class will only use a single queue pair. When zero VFs are configured, the PF can support multiple queue pairs per traffic class.
allow_unsupported_sfp
This parameter allows unsupported and untested SFP+ modules on 82599-based adapters, as long as the type of module is known to the driver.
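A minimal sketch of loading the driver with this parameter enabled (run modinfo ixgbe to confirm the parameters your driver build actually exposes):

modprobe ixgbe allow_unsupported_sfp=1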