..  BSD LICENSE
    Copyright(c) 2015 Netronome Systems, Inc. All rights reserved.
    All rights reserved.

    Redistribution and use in source and binary forms, with or without
    modification, are permitted provided that the following conditions
    are met:

    * Redistributions of source code must retain the above copyright
      notice, this list of conditions and the following disclaimer.
    * Redistributions in binary form must reproduce the above copyright
      notice, this list of conditions and the following disclaimer in
      the documentation and/or other materials provided with the
      distribution.
    * Neither the name of Intel Corporation nor the names of its
      contributors may be used to endorse or promote products derived
      from this software without specific prior written permission.

    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

NFP poll mode driver library
============================

Netronome's sixth generation of flow processors pack 216 programmable
cores and over 100 hardware accelerators that uniquely combine packet,
flow, security and content processing in a single device that scales
up to 400 Gbps.

This document explains how to use DPDK with the Netronome Poll Mode
Driver (PMD) supporting Netronome's Network Flow Processor 6xxx
(NFP-6xxx).

Currently the driver supports virtual functions (VFs) only.

Dependencies
------------

Before using the Netronome DPDK PMD some NFP-6xxx configuration, which is
not related to DPDK, is required. The system requires installation of
**Netronome's BSP (Board Support Package)**, which includes Linux drivers,
programs and libraries.

If you have an NFP-6xxx device you should already have the code and
documentation for doing this configuration. Contact
**support@netronome.com** to obtain the latest available firmware.

The NFP Linux kernel drivers (including the required PF driver for the
NFP) are available on Github at
**https://github.com/Netronome/nfp-drv-kmods** along with build
instructions.
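
As a minimal sketch, assuming the repository follows the usual out-of-tree
kernel module build flow (consult the repository's own build instructions
for the authoritative steps):

.. code-block:: console

   git clone https://github.com/Netronome/nfp-drv-kmods.git
   cd nfp-drv-kmods
   make
   make install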

DPDK runs in userspace and PMDs use the Linux kernel UIO interface to
allow access to physical devices from userspace. The NFP PMD requires
the **igb_uio** UIO driver, available with DPDK, to perform correct
initialization.

Building the software
---------------------

Netronome's PMD code is provided in the **drivers/net/nfp** directory.
Because of its BSP dependencies the driver is disabled by default in the
DPDK build, as set in the **common_linuxapp** configuration file. To enable
the driver, or if you use another configuration file and want NFP support,
this variable is needed:

- **CONFIG_RTE_LIBRTE_NFP_PMD=y**
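
As an illustrative sketch, the variable may be switched on in place before
building, assuming the classic make-based build with the
x86_64-native-linuxapp-gcc target:

.. code-block:: console

   sed -i 's/CONFIG_RTE_LIBRTE_NFP_PMD=n/CONFIG_RTE_LIBRTE_NFP_PMD=y/' \
       config/common_linuxapp
   make install T=x86_64-native-linuxapp-gcc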

Once DPDK is built all the DPDK apps and examples include support for
the NFP PMD.


System configuration
--------------------

Using the NFP PMD is no different from using other PMDs. The usual steps
are:

#. **Configure hugepages:** All major Linux distributions have the hugepages
   functionality enabled by default. The system typically uses this for
   transparent hugepages, but in this case some hugepages need to be
   created/reserved for use with DPDK through the hugetlbfs file system.
   First the virtual file system needs to be mounted:

   .. code-block:: console

      mount -t hugetlbfs none /mnt/hugetlbfs

   The command uses the common mount point for this file system; it needs
   to be created first if it does not exist.
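
   For example, the mount point may be created with:

   .. code-block:: console

      mkdir -p /mnt/hugetlbfs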

   Configuring hugepages is performed via sysfs:

   .. code-block:: console

      /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

   This sysfs file is used to specify the number of hugepages to reserve.
   For example:

   .. code-block:: console

      echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

   This will reserve 2GB of memory using 1024 2MB hugepages. The file may be
   read to check that the operation was performed correctly:

   .. code-block:: console

      cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

   The number of unused hugepages may also be inspected; before executing
   the DPDK app it should match the value of nr_hugepages:

   .. code-block:: console

      cat /sys/kernel/mm/hugepages/hugepages-2048kB/free_hugepages

   The hugepages reservation should be performed at system initialization,
   and it is usual to use a kernel parameter for configuration. If the
   reservation is attempted on a busy system it will likely fail. Reserving
   memory for hugepages may be done by adding the following to the grub
   kernel command line:

   .. code-block:: console

      default_hugepagesz=2M hugepagesz=2M hugepages=1024

   This will reserve 2GB of memory using 2MB hugepages.
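
   As an illustrative sketch on a distribution using GRUB2 (the file
   location and update command vary by distribution), the parameters may be
   added to /etc/default/grub and the grub configuration regenerated:

   .. code-block:: console

      # In /etc/default/grub (assumed GRUB2 layout):
      #   GRUB_CMDLINE_LINUX_DEFAULT="default_hugepagesz=2M hugepagesz=2M hugepages=1024"
      update-grub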

   Finally, for a NUMA system the allocation needs to be made on the correct
   NUMA node. In a DPDK app there is a master core which will (usually)
   perform memory allocation. It is important that some of the hugepages are
   reserved on the NUMA memory node where the network device is attached.
   This is because of a restriction in DPDK by which TX and RX descriptor
   rings must be created on the master core.

   Per-node allocation of hugepages may be inspected and controlled using
   sysfs. For example:

   .. code-block:: console

      cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages

   For a NUMA system there will be a specific hugepage directory per node,
   allowing control of hugepage reservation. A common problem may occur when
   hugepage reservation is performed after the system has been running for
   some time: configuration using the global sysfs hugepage interface will
   succeed, but the per-node allocations may be unsatisfactory.
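
   For example, to reserve 512 2MB hugepages specifically on node 0:

   .. code-block:: console

      echo 512 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages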

   The number of hugepages that need to be reserved depends on how the app
   uses TX and RX descriptors, and packet mbufs.
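
   As a rough, purely illustrative estimate: an app with two RX and two TX
   rings of 512 descriptors each, plus a pool of 8192 2KB mbufs, needs on
   the order of 16-20MB, so the 2GB reserved in the examples above leaves
   ample headroom.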

#. **Enable SR-IOV on the NFP-6xxx device:** The current NFP PMD works with
   Virtual Functions (VFs) on an NFP device. Make sure that one of the
   Physical Function (PF) drivers from the above Github repository is
   installed and loaded.
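
   For example, assuming the PF driver module built from that repository is
   named **nfp** (check the repository documentation for the exact name),
   loading and verifying it might look like:

   .. code-block:: console

      modprobe nfp
      lsmod | grep nfp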

   Virtual Functions need to be enabled before they can be used with the
   PMD. Before enabling the VFs it is useful to obtain information about the
   current NFP PCI device detected by the system:

   .. code-block:: console

      lspci -d19ee:

   Now, for example, configure two virtual functions on an NFP-6xxx device
   whose PCI system identity is "0000:03:00.0":

   .. code-block:: console

      echo 2 > /sys/bus/pci/devices/0000:03:00.0/sriov_numvfs

   The result of this command may be shown using lspci again:

   .. code-block:: console

      lspci -d19ee: -k

   Two new PCI devices should appear in the output of the above command. The
   -k option shows the device driver, if any, that the devices are bound to.
   Depending on the modules loaded, at this point the new PCI devices may be
   bound to the nfp_netvf driver.

#. **To install the uio kernel module (manually):** All major Linux
   distributions have support for this kernel module so it is straightforward
   to install it:

   .. code-block:: console

      modprobe uio

   The module should now be listed by the lsmod command.
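
   For example, its presence may be checked with:

   .. code-block:: console

      lsmod | grep -w uio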

#. **To install the igb_uio kernel module (manually):** This module is part
   of the DPDK sources and is enabled by default (CONFIG_RTE_EAL_IGB_UIO=y).
   Because it is built along with DPDK rather than installed among the
   system modules, load it with insmod from the DPDK build's kmod directory:

   .. code-block:: console

      insmod igb_uio.ko

   The module should now be listed by the lsmod command.

   Depending on which NFP modules are loaded, it could be necessary to
   detach NFP devices from the nfp_netvf module. If this is the case the
   device needs to be unbound, for example:

   .. code-block:: console

      echo 0000:03:08.0 > /sys/bus/pci/devices/0000:03:08.0/driver/unbind

      lspci -d19ee: -k

   The output of lspci should now show that 0000:03:08.0 is not bound to
   any driver.

   The next step is to add the NFP PCI ID to the IGB UIO driver:

   .. code-block:: console

      echo 19ee 6003 > /sys/bus/pci/drivers/igb_uio/new_id

   And then to bind the device to the igb_uio driver:

   .. code-block:: console

      echo 0000:03:08.0 > /sys/bus/pci/drivers/igb_uio/bind

      lspci -d19ee: -k

   lspci should now show the device bound to the igb_uio driver.

#. **Using scripts to install and bind modules:** DPDK provides scripts which
   are useful for installing the UIO modules and for binding the right device
   to those modules, avoiding the manual steps above:

   * **dpdk-setup.sh**
   * **dpdk-devbind.py**

   Configuration may be performed by running dpdk-setup.sh, which invokes
   dpdk-devbind.py as needed. Executing dpdk-setup.sh will display a menu of
   configuration options.
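
   Alternatively, dpdk-devbind.py may be run directly. As a sketch, assuming
   the scripts live in the **usertools** directory of the DPDK tree and
   using the VF from the earlier example:

   .. code-block:: console

      ./usertools/dpdk-devbind.py --status
      ./usertools/dpdk-devbind.py --bind=igb_uio 0000:03:08.0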
251