..  BSD LICENSE
    Copyright(c) 2015 Netronome Systems, Inc. All rights reserved.
    All rights reserved.

    Redistribution and use in source and binary forms, with or without
    modification, are permitted provided that the following conditions
    are met:

    * Redistributions of source code must retain the above copyright
      notice, this list of conditions and the following disclaimer.
    * Redistributions in binary form must reproduce the above copyright
      notice, this list of conditions and the following disclaimer in
      the documentation and/or other materials provided with the
      distribution.
    * Neither the name of Intel Corporation nor the names of its
      contributors may be used to endorse or promote products derived
      from this software without specific prior written permission.

    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

NFP poll mode driver library
============================

Netronome's sixth generation of flow processors pack 216 programmable
cores and over 100 hardware accelerators that uniquely combine packet,
flow, security and content processing in a single device that scales
up to 400 Gbps.

This document explains how to use DPDK with the Netronome Poll Mode
Driver (PMD) supporting Netronome's Network Flow Processor 6xxx
(NFP-6xxx).

Currently the driver supports virtual functions (VFs) only.

Dependencies
------------

Before using Netronome's DPDK PMD some NFP-6xxx configuration, which is
not related to DPDK, is required. The system requires installation of
**Netronome's BSP (Board Support Package)**, which includes Linux drivers,
programs and libraries.

If you have an NFP-6xxx device you should already have the code and
documentation for doing this configuration. Contact
**support@netronome.com** to obtain the latest available firmware.

The NFP Linux kernel drivers (including the required PF driver for the
NFP) are available on Github at
**https://github.com/Netronome/nfp-drv-kmods** along with build
instructions.
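
As a minimal sketch, the drivers can be fetched and built roughly as shown
below; the plain ``make`` invocation is an assumption, so refer to the
repository's own build instructions for the authoritative steps:

.. code-block:: console

   git clone https://github.com/Netronome/nfp-drv-kmods
   cd nfp-drv-kmods
   make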

DPDK runs in userspace and PMDs use the Linux kernel UIO interface to
allow access to physical devices from userspace. The NFP PMD requires
a separate UIO driver, **nfp_uio**, to perform correct
initialization. This driver is part of Netronome's BSP and is
equivalent to Intel's igb_uio driver.

Building the software
---------------------

Netronome's PMD code is provided in the **drivers/net/nfp** directory.
Because of Netronome's BSP dependencies the driver is disabled by default
in the DPDK build through the **common_linuxapp** configuration file. To
enable the driver, or if you use another configuration file and want NFP
support, this variable is needed:

- **CONFIG_RTE_LIBRTE_NFP_PMD=y**

Once DPDK is built all the DPDK apps and examples include support for
the NFP PMD.
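
For illustration, a minimal sketch of enabling the option and building;
the build target name below is the common x86_64 native Linux one and is
only an example:

.. code-block:: console

   sed -i 's/CONFIG_RTE_LIBRTE_NFP_PMD=n/CONFIG_RTE_LIBRTE_NFP_PMD=y/' config/common_linuxapp
   make install T=x86_64-native-linuxapp-gcc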


System configuration
--------------------

Using the NFP PMD is not different from using other PMDs. The usual steps
are:

#. **Configure hugepages:** All major Linux distributions have hugepage
   support enabled by default, and by default the system works with
   transparent hugepages. In this case, however, some hugepages need to be
   created/reserved for use with DPDK through the hugetlbfs file system.
   First the virtual file system needs to be mounted:

   .. code-block:: console

      mount -t hugetlbfs none /mnt/hugetlbfs

   The command uses the common mount point for this file system; if
   necessary, the mount point needs to be created first.
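
   For example (using the same mount point as the mount command above):

   .. code-block:: console

      mkdir -p /mnt/hugetlbfs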

   Configuring hugepages is performed via sysfs:

   .. code-block:: console

      /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

   This sysfs file is used to specify the number of hugepages to reserve.
   For example:

   .. code-block:: console

      echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

   This will reserve 2GB of memory using 1024 2MB hugepages. The file may be
   read to see if the operation was performed correctly:

   .. code-block:: console

      cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

   The number of unused hugepages may also be inspected; before executing
   the DPDK app it should match the value of nr_hugepages:

   .. code-block:: console

      cat /sys/kernel/mm/hugepages/hugepages-2048kB/free_hugepages

   The hugepages reservation should be performed at system initialization
   and it is usual to use a kernel parameter for configuration. If the
   reservation is attempted on a busy system it will likely fail. Reserving
   memory for hugepages may be done by adding the following to the grub
   kernel command line:

   .. code-block:: console

      default_hugepagesz=2M hugepagesz=2M hugepages=1024

   This will reserve 2 Gbytes of memory using 2 Mbyte hugepages.
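
   After changing the kernel command line the bootloader configuration has
   to be regenerated and the system rebooted. The exact command is
   distribution specific; for example, on grub2-based systems something
   like the following is typically used (``update-grub`` on Debian/Ubuntu):

   .. code-block:: console

      grub2-mkconfig -o /boot/grub2/grub.cfg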

   Finally, for a NUMA system the allocation needs to be made on the correct
   NUMA node. In a DPDK app there is a master core which will (usually)
   perform memory allocation. It is important that some of the hugepages are
   reserved on the NUMA memory node where the network device is attached.
   This is because of a restriction in DPDK by which TX and RX descriptor
   rings must be created on the master core.
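
   As an illustrative sketch (core mask, channel count and memory amounts
   are placeholders), the EAL ``--socket-mem`` option can be used when
   launching a DPDK app such as testpmd to request the hugepage memory from
   a specific NUMA node, here node 0:

   .. code-block:: console

      ./testpmd -c 0xf -n 4 --socket-mem=1024,0 -- -i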

   Per-node allocation of hugepages may be inspected and controlled using
   sysfs. For example:

   .. code-block:: console

      cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages

   For a NUMA system there will be a specific hugepage directory per node
   allowing control of hugepage reservation. A common problem may occur when
   hugepages reservation is performed after the system has been working for
   some time. Configuration using the global sysfs hugepage interface will
   succeed but the per-node allocations may be unsatisfactory.
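
   In that situation hugepages can be reserved explicitly on a given node
   through the per-node sysfs file shown above; node0 below is only an
   example, use the node the NFP device is attached to:

   .. code-block:: console

      echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages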

   The number of hugepages that need to be reserved depends on how the app
   uses TX and RX descriptors, and packet mbufs.

#. **Enable SR-IOV on the NFP-6xxx device:** The current NFP PMD works with
   Virtual Functions (VFs) on an NFP device. Make sure that one of the
   Physical Function (PF) drivers from the above Github repository is
   installed and loaded.

   Virtual Functions need to be enabled before they can be used with the PMD.
   Before enabling the VFs it is useful to obtain information about the
   current NFP PCI device detected by the system:

   .. code-block:: console

      lspci -d19ee:
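
   The maximum number of VFs the PF supports can also be read from the
   standard sriov_totalvfs sysfs attribute; the PCI address below is the
   example one used in this section:

   .. code-block:: console

      cat /sys/bus/pci/devices/0000:03:00.0/sriov_totalvfs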

   Now, for example, configure two virtual functions on an NFP-6xxx device
   whose PCI system identity is "0000:03:00.0":

   .. code-block:: console

      echo 2 > /sys/bus/pci/devices/0000:03:00.0/sriov_numvfs

   The result of this command may be shown using lspci again:

   .. code-block:: console

      lspci -d19ee: -k

   Two new PCI devices should appear in the output of the above command. The
   -k option shows the device driver, if any, that the devices are bound to.
   Depending on the modules loaded, at this point the new PCI devices may be
   bound to the nfp_netvf driver.

#. **To install the uio kernel module (manually):** All major Linux
   distributions have support for this kernel module so it is straightforward
   to install it:

   .. code-block:: console

      modprobe uio

   The module should now be listed by the lsmod command.
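
   A quick way to verify this, for example:

   .. code-block:: console

      lsmod | grep uio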

#. **To install the nfp_uio kernel module (manually):** This module supports
   NFP-6xxx devices through the UIO interface.

   This module is part of Netronome's BSP and it should be available when the
   BSP is installed.

   .. code-block:: console

      modprobe nfp_uio

   The module should now be listed by the lsmod command.

   Depending on which NFP modules are loaded, nfp_uio may be automatically
   bound to the NFP PCI devices by the system. Otherwise the binding needs
   to be done explicitly. This is the case when nfp_netvf, the Linux kernel
   driver for NFP VFs, was loaded when VFs were created. As described later
   in this document this configuration may also be performed using scripts
   provided by Netronome's BSP.

   First the device needs to be unbound, for example from the nfp_netvf
   driver:

   .. code-block:: console

      echo 0000:03:08.0 > /sys/bus/pci/devices/0000:03:08.0/driver/unbind

      lspci -d19ee: -k

   The output of lspci should now show that 0000:03:08.0 is not bound to
   any driver.

   The next step is to add the NFP PCI ID to the NFP UIO driver:

   .. code-block:: console

      echo 19ee 6003 > /sys/bus/pci/drivers/nfp_uio/new_id

   And then to bind the device to the nfp_uio driver:

   .. code-block:: console

      echo 0000:03:08.0 > /sys/bus/pci/drivers/nfp_uio/bind

      lspci -d19ee: -k

   lspci should now show the device bound to the nfp_uio driver.

#. **Using tools from Netronome's BSP to install and bind modules:** DPDK
   provides scripts which are useful for installing the UIO modules and for
   binding the right device to those modules, avoiding the need to do so
   manually. However, these scripts do not support Netronome's UIO driver.
   Along with the drivers, the BSP installs slightly modified versions of
   these DPDK scripts with support for Netronome's UIO driver.

   Those specific scripts can be found in Netronome's BSP installation
   directory. Refer to the BSP documentation for more information.

   * **setup.sh**
   * **dpdk_nic_bind.py**

   Configuration may be performed by running setup.sh which invokes
   dpdk_nic_bind.py as needed. Executing setup.sh will display a menu of
   configuration options.
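
   Alternatively, dpdk_nic_bind.py may be invoked directly. As a sketch,
   assuming the BSP copy keeps the standard options of the upstream DPDK
   script and using the example VF address from above:

   .. code-block:: console

      ./dpdk_nic_bind.py --status
      ./dpdk_nic_bind.py --bind=nfp_uio 0000:03:08.0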
266