mlx5.rst revision 6cfa4f77
..  BSD LICENSE
    Copyright 2015 6WIND S.A.

    Redistribution and use in source and binary forms, with or without
    modification, are permitted provided that the following conditions
    are met:

    * Redistributions of source code must retain the above copyright
      notice, this list of conditions and the following disclaimer.
    * Redistributions in binary form must reproduce the above copyright
      notice, this list of conditions and the following disclaimer in
      the documentation and/or other materials provided with the
      distribution.
    * Neither the name of 6WIND S.A. nor the names of its
      contributors may be used to endorse or promote products derived
      from this software without specific prior written permission.

    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

MLX5 poll mode driver
=====================

The MLX5 poll mode driver library (**librte_pmd_mlx5**) provides support for
**Mellanox ConnectX-4** and **Mellanox ConnectX-4 Lx** families of
10/25/40/50/100 Gb/s adapters as well as their virtual functions (VF) in
SR-IOV context.

Information and documentation about these adapters can be found on the
`Mellanox website <http://www.mellanox.com>`__. Help is also provided by the
`Mellanox community <http://community.mellanox.com/welcome>`__.

There is also a `section dedicated to this poll mode driver
<http://www.mellanox.com/page/products_dyn?product_family=209&mtag=pmd_for_dpdk>`__.

.. note::

   Due to external dependencies, this driver is disabled by default. It must
   be enabled manually by setting ``CONFIG_RTE_LIBRTE_MLX5_PMD=y`` and
   recompiling DPDK.

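A minimal sketch of that workflow, assuming an x86_64 Linux target and the
default ``build`` directory layout (target name and paths are illustrative,
adjust them to your environment):

.. code-block:: console

   # Generate the build configuration for the target (example target name).
   make config T=x86_64-native-linuxapp-gcc

   # Flip the option from "n" to "y" in the generated .config file.
   sed -i 's/^CONFIG_RTE_LIBRTE_MLX5_PMD=n$/CONFIG_RTE_LIBRTE_MLX5_PMD=y/' \
       build/.config

   # Recompile DPDK with the PMD enabled.
   make
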
Implementation details
----------------------

Besides its dependency on libibverbs (which implies libmlx5 and associated
kernel support), librte_pmd_mlx5 relies heavily on system calls for control
operations such as querying and updating the MTU and flow control parameters.

For security reasons and robustness, this driver only deals with virtual
memory addresses. The way resource allocations are handled by the kernel,
combined with hardware specifications that allow it to handle virtual memory
addresses directly, ensures that DPDK applications cannot access random
physical memory (or memory that does not belong to the current process).

This capability allows the PMD to coexist with kernel network interfaces,
which remain functional, although they stop receiving unicast packets as
long as they share the same MAC address.

Enabling librte_pmd_mlx5 causes DPDK applications to be linked against
libibverbs.

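Whether a given binary actually picked up that dependency can be checked
quickly with ``ldd``; the testpmd path below assumes a default build
location and is only illustrative:

.. code-block:: console

   ldd build/app/testpmd | grep libibverbs
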
Features
--------

- Multiple TX and RX queues.
- Support for scattered TX and RX frames.
- IPv4, IPv6, TCPv4, TCPv6, UDPv4 and UDPv6 RSS on any number of queues.
- Several RSS hash keys, one for each flow type.
- Configurable RETA table.
- Support for multiple MAC addresses.
- VLAN filtering.
- RX VLAN stripping.
- TX VLAN insertion.
- RX CRC stripping configuration.
- Promiscuous mode.
- Multicast promiscuous mode.
- Hardware checksum offloads.
- Flow director (RTE_FDIR_MODE_PERFECT and RTE_FDIR_MODE_PERFECT_MAC_VLAN).
- Secondary process TX is supported.
- KVM and VMware ESX SR-IOV modes are supported.

Limitations
-----------

- Inner RSS for VXLAN frames is not supported yet.
- Port statistics through software counters only.
- Hardware checksum offloads for VXLAN inner header are not supported yet.
- Secondary process RX is not supported.

Configuration
-------------

Compilation options
~~~~~~~~~~~~~~~~~~~

These options can be modified in the ``.config`` file.

- ``CONFIG_RTE_LIBRTE_MLX5_PMD`` (default **n**)

  Toggle compilation of librte_pmd_mlx5 itself.

- ``CONFIG_RTE_LIBRTE_MLX5_DEBUG`` (default **n**)

  Toggle debugging code and stricter compilation flags. Enabling this option
  adds additional run-time checks and debugging messages at the cost of
  lower performance.

- ``CONFIG_RTE_LIBRTE_MLX5_TX_MP_CACHE`` (default **8**)

  Maximum number of cached memory pools (MPs) per TX queue. Each MP from
  which buffers are to be transmitted must be associated with memory regions
  (MRs). This is a slow operation that must therefore be cached.

  This value is always 1 for RX queues since they use a single MP.

Environment variables
~~~~~~~~~~~~~~~~~~~~~

- ``MLX5_PMD_ENABLE_PADDING``

  Enables HW packet padding in PCI bus transactions.

  When packet size is cache aligned and CRC stripping is enabled, 4 fewer
  bytes are written to the PCI bus. Enabling padding makes such packets
  aligned again.

  In cases where PCI bandwidth is the bottleneck, padding can improve
  performance by 10%.

  This is disabled by default since this can also decrease performance for
  unaligned packet sizes.

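Since this is an environment variable, it only needs to be set in the
application's environment for a given run; for example with testpmd
(coremask and channel arguments are illustrative):

.. code-block:: console

   MLX5_PMD_ENABLE_PADDING=1 testpmd -c 0xff00 -n 4 -- -i
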
Run-time configuration
~~~~~~~~~~~~~~~~~~~~~~

- librte_pmd_mlx5 brings kernel network interfaces up during initialization
  because it is affected by their state. Forcing them down prevents packet
  reception.

- **ethtool** operations on related kernel interfaces also affect the PMD.

- ``rxq_cqe_comp_en`` parameter [int]

  A nonzero value enables the compression of CQE on RX side. This feature
  saves PCI bandwidth and improves performance at the cost of slightly
  higher CPU usage. Enabled by default.

  Supported on:

  - x86_64 with ConnectX-4 and ConnectX-4 Lx
  - Power8 with ConnectX-4 Lx

- ``txq_inline`` parameter [int]

  Amount of data to be inlined during TX operations. Improves latency.
  Can improve PPS performance when PCI back pressure is detected and may be
  useful for scenarios involving heavy traffic on many queues.

  It is not enabled by default (set to 0) since the additional software
  logic necessary to handle this mode can lower performance when back
  pressure is not expected.

- ``txqs_min_inline`` parameter [int]

  Enable inline send only when the number of TX queues is greater than or
  equal to this value.

  This option should be used in combination with ``txq_inline`` above.

- ``txq_mpw_en`` parameter [int]

  A nonzero value enables multi-packet send. This feature allows the TX
  burst function to pack up to five packets in two descriptors in order to
  save PCI bandwidth and improve performance at the cost of a slightly
  higher CPU usage.

  It is currently only supported on the ConnectX-4 Lx family of adapters.
  Enabled by default.

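These parameters are device arguments and can be appended to a whitelisted
PCI address on the command line; the address and values below are purely
illustrative:

.. code-block:: console

   testpmd -c 0xff00 -n 4 -w 05:00.0,txq_inline=128,txqs_min_inline=4 -- -i
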
Prerequisites
-------------

This driver relies on external libraries and kernel drivers for resource
allocation and initialization. The following dependencies are not part of
DPDK and must be installed separately:

- **libibverbs**

  User space Verbs framework used by librte_pmd_mlx5. This library provides
  a generic interface between the kernel and low-level user space drivers
  such as libmlx5.

  It allows slow and privileged operations (context initialization, hardware
  resource allocation) to be managed by the kernel and fast operations to
  never leave user space.

- **libmlx5**

  Low-level user space driver library for Mellanox ConnectX-4 devices;
  it is automatically loaded by libibverbs.

  This library basically implements send/receive calls to the hardware
  queues.

- **Kernel modules** (mlnx-ofed-kernel)

  They provide the kernel-side Verbs API and low-level device drivers that
  manage actual hardware initialization and resource sharing with user
  space processes.

  Unlike most other PMDs, these modules must remain loaded and bound to
  their devices:

  - mlx5_core: hardware driver managing Mellanox ConnectX-4 devices and
    related Ethernet kernel network devices.
  - mlx5_ib: InfiniBand device driver.
  - ib_uverbs: user space driver for Verbs (entry point for libibverbs).

- **Firmware update**

  Mellanox OFED releases include firmware updates for ConnectX-4 adapters.

  Because each release provides new features, these updates must be applied to
  match the kernel modules and libraries they come with.

.. note::

   Both libraries are BSD and GPL licensed. Linux kernel modules are GPL
   licensed.

Currently supported by DPDK:

- Mellanox OFED **3.3-1.0.0.0** and **3.3-2.0.0.0**.

- Minimum firmware version:

  - ConnectX-4: **12.16.1006**
  - ConnectX-4 Lx: **14.16.1006**

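The firmware version currently running on an adapter can be checked, for
instance, with the ``ibv_devinfo`` utility shipped with libibverbs (output
format may vary between versions):

.. code-block:: console

   ibv_devinfo | grep fw_ver
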
Getting Mellanox OFED
~~~~~~~~~~~~~~~~~~~~~

While these libraries and kernel modules are available on OpenFabrics
Alliance's `website <https://www.openfabrics.org/>`__ and provided by package
managers on most distributions, this PMD requires Ethernet extensions that
may not be supported at the moment (this is a work in progress).

`Mellanox OFED
<http://www.mellanox.com/page/products_dyn?product_family=26&mtag=linux>`__
includes the necessary support and should be used in the meantime. For DPDK,
only libibverbs, libmlx5, mlnx-ofed-kernel packages and firmware updates are
required from that distribution.

.. note::

   Several versions of Mellanox OFED are available. Installing the version
   this DPDK release was developed and tested against is strongly
   recommended. Please check the `prerequisites`_.

Notes for testpmd
-----------------

Compared to librte_pmd_mlx4 that implements a single RSS configuration per
port, librte_pmd_mlx5 supports per-protocol RSS configuration.

Since ``testpmd`` defaults to IP RSS mode and there is currently no
command-line parameter to enable additional protocols (UDP and TCP as well
as IP), the following commands must be entered from its CLI to get the same
behavior as librte_pmd_mlx4:

.. code-block:: console

   > port stop all
   > port config all rss all
   > port start all

Usage example
-------------

This section demonstrates how to launch **testpmd** with Mellanox ConnectX-4
devices managed by librte_pmd_mlx5.

#. Load the kernel modules:

   .. code-block:: console

      modprobe -a ib_uverbs mlx5_core mlx5_ib

   Alternatively if MLNX_OFED is fully installed, the following script can
   be run:

   .. code-block:: console

      /etc/init.d/openibd restart

   .. note::

      User space I/O kernel modules (uio and igb_uio) are not used and do
      not have to be loaded.

#. Make sure Ethernet interfaces are in working order and linked to kernel
   verbs. Related sysfs entries should be present:

   .. code-block:: console

      ls -d /sys/class/net/*/device/infiniband_verbs/uverbs* | cut -d / -f 5

   Example output:

   .. code-block:: console

      eth30
      eth31
      eth32
      eth33

#. Optionally, retrieve their PCI bus addresses for whitelisting:

   .. code-block:: console

      {
          for intf in eth30 eth31 eth32 eth33;
          do
              (cd "/sys/class/net/${intf}/device/" && pwd -P);
          done;
      } |
      sed -n 's,.*/\(.*\),-w \1,p'

   Example output:

   .. code-block:: console

      -w 0000:05:00.1
      -w 0000:06:00.0
      -w 0000:06:00.1
      -w 0000:05:00.0

#. Request huge pages:

   .. code-block:: console

      echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

#. Start testpmd with basic parameters:

   .. code-block:: console

      testpmd -c 0xff00 -n 4 -w 05:00.0 -w 05:00.1 -w 06:00.0 -w 06:00.1 -- --rxq=2 --txq=2 -i

   Example output:

   .. code-block:: console

      [...]
      EAL: PCI device 0000:05:00.0 on NUMA socket 0
      EAL:   probe driver: 15b3:1013 librte_pmd_mlx5
      PMD: librte_pmd_mlx5: PCI information matches, using device "mlx5_0" (VF: false)
      PMD: librte_pmd_mlx5: 1 port(s) detected
      PMD: librte_pmd_mlx5: port 1 MAC address is e4:1d:2d:e7:0c:fe
      EAL: PCI device 0000:05:00.1 on NUMA socket 0
      EAL:   probe driver: 15b3:1013 librte_pmd_mlx5
      PMD: librte_pmd_mlx5: PCI information matches, using device "mlx5_1" (VF: false)
      PMD: librte_pmd_mlx5: 1 port(s) detected
      PMD: librte_pmd_mlx5: port 1 MAC address is e4:1d:2d:e7:0c:ff
      EAL: PCI device 0000:06:00.0 on NUMA socket 0
      EAL:   probe driver: 15b3:1013 librte_pmd_mlx5
      PMD: librte_pmd_mlx5: PCI information matches, using device "mlx5_2" (VF: false)
      PMD: librte_pmd_mlx5: 1 port(s) detected
      PMD: librte_pmd_mlx5: port 1 MAC address is e4:1d:2d:e7:0c:fa
      EAL: PCI device 0000:06:00.1 on NUMA socket 0
      EAL:   probe driver: 15b3:1013 librte_pmd_mlx5
      PMD: librte_pmd_mlx5: PCI information matches, using device "mlx5_3" (VF: false)
      PMD: librte_pmd_mlx5: 1 port(s) detected
      PMD: librte_pmd_mlx5: port 1 MAC address is e4:1d:2d:e7:0c:fb
      Interactive-mode selected
      Configuring Port 0 (socket 0)
      PMD: librte_pmd_mlx5: 0x8cba80: TX queues number update: 0 -> 2
      PMD: librte_pmd_mlx5: 0x8cba80: RX queues number update: 0 -> 2
      Port 0: E4:1D:2D:E7:0C:FE
      Configuring Port 1 (socket 0)
      PMD: librte_pmd_mlx5: 0x8ccac8: TX queues number update: 0 -> 2
      PMD: librte_pmd_mlx5: 0x8ccac8: RX queues number update: 0 -> 2
      Port 1: E4:1D:2D:E7:0C:FF
      Configuring Port 2 (socket 0)
      PMD: librte_pmd_mlx5: 0x8cdb10: TX queues number update: 0 -> 2
      PMD: librte_pmd_mlx5: 0x8cdb10: RX queues number update: 0 -> 2
      Port 2: E4:1D:2D:E7:0C:FA
      Configuring Port 3 (socket 0)
      PMD: librte_pmd_mlx5: 0x8ceb58: TX queues number update: 0 -> 2
      PMD: librte_pmd_mlx5: 0x8ceb58: RX queues number update: 0 -> 2
      Port 3: E4:1D:2D:E7:0C:FB
      Checking link statuses...
      Port 0 Link Up - speed 40000 Mbps - full-duplex
      Port 1 Link Up - speed 40000 Mbps - full-duplex
      Port 2 Link Up - speed 10000 Mbps - full-duplex
      Port 3 Link Up - speed 10000 Mbps - full-duplex
      Done
      testpmd>

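From the interactive prompt, packet forwarding can then be started and
per-port statistics displayed with standard testpmd commands:

.. code-block:: console

   testpmd> start
   testpmd> show port stats all
   testpmd> stop
   testpmd> quit
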