..  BSD LICENSE
    Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
    All rights reserved.

    Redistribution and use in source and binary forms, with or without
    modification, are permitted provided that the following conditions
    are met:

    * Redistributions of source code must retain the above copyright
      notice, this list of conditions and the following disclaimer.
    * Redistributions in binary form must reproduce the above copyright
      notice, this list of conditions and the following disclaimer in
      the documentation and/or other materials provided with the
      distribution.
    * Neither the name of Intel Corporation nor the names of its
      contributors may be used to endorse or promote products derived
      from this software without specific prior written permission.

    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

.. _kni:

Kernel NIC Interface
====================

The DPDK Kernel NIC Interface (KNI) allows userspace applications access to the Linux* control plane.

The benefits of using the DPDK KNI are:

*   Faster than existing Linux TUN/TAP interfaces
    (by eliminating system calls and copy_to_user()/copy_from_user() operations).

*   Allows management of DPDK ports using standard Linux net tools such as ethtool, ifconfig and tcpdump.

*   Allows an interface with the kernel network stack.

The components of an application using the DPDK Kernel NIC Interface are shown in :numref:`figure_kernel_nic_intf`.

.. _figure_kernel_nic_intf:

.. figure:: img/kernel_nic_intf.*

   Components of a DPDK KNI Application


The DPDK KNI Kernel Module
--------------------------

The KNI kernel loadable module provides support for two types of devices:

*   A Miscellaneous device (/dev/kni) that:

    *   Creates net devices (via ioctl calls).

    *   For single kernel thread mode, maintains a kernel thread context shared by all KNI instances
        (simulating the RX side of the net driver).

    *   For multiple kernel thread mode, maintains a kernel thread context for each KNI instance
        (simulating the RX side of the net driver).

*   Net device:

    *   Net functionality is provided by implementing several operations such as netdev_ops,
        header_ops and ethtool_ops, which are defined by struct net_device,
        including support for DPDK mbufs and FIFOs.

    *   The interface name is provided from userspace.

    *   The MAC address can be the real NIC MAC address or a random one.

KNI Creation and Deletion
-------------------------

KNI interfaces are created dynamically by a DPDK application.
The interface name and FIFO details are provided by the application through an ioctl call
using the rte_kni_device_info struct, which contains:

*   The interface name.

*   Physical addresses of the corresponding memzones for the relevant FIFOs.

*   Mbuf mempool details, both physical and virtual (to calculate the offset for mbuf pointers).

*   PCI information.

*   Core affinity.

Refer to rte_kni_common.h in the DPDK source code for more details.

The physical addresses will be re-mapped into the kernel address space and stored in separate KNI contexts.

The KNI interfaces can also be deleted dynamically by the DPDK application after being created.
Furthermore, any KNI interfaces that have not been deleted will be deleted when the miscellaneous device is released
(that is, when the DPDK application is closed).
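
The sketch below illustrates this flow from the application side; the rte_kni library fills in rte_kni_device_info and issues the ioctl on the application's behalf. It is modelled on the KNI sample application: the function name, the mbuf size value and the omitted error handling are illustrative assumptions rather than part of the API.

.. code-block:: c

    #include <stdio.h>
    #include <string.h>

    #include <rte_kni.h>

    /* Create a KNI interface for the given port, backed by the application's
     * mbuf mempool. Error handling is omitted for brevity. */
    static struct rte_kni *
    kni_create(uint8_t port_id, struct rte_mempool *pktmbuf_pool)
    {
        struct rte_kni_conf conf;

        /* Describe the interface to be created; the library converts this
         * into the rte_kni_device_info passed to the kernel via ioctl. */
        memset(&conf, 0, sizeof(conf));
        snprintf(conf.name, RTE_KNI_NAMESIZE, "vEth%u", port_id);
        conf.group_id = (uint16_t)port_id;
        conf.mbuf_size = 2048;

        /* No control callbacks are registered here; see the Link state and
         * MTU change section below. */
        return rte_kni_alloc(pktmbuf_pool, &conf, NULL);
    }

    /* ... later, when the interface is no longer needed ... */
    /* rte_kni_release(kni); */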

DPDK mbuf Flow
--------------

To minimize the amount of DPDK code running in kernel space, the mbuf mempool is managed in userspace only.
The kernel module will be aware of mbufs,
but all mbuf allocation and free operations will be handled by the DPDK application only.
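
Because the mempool lives entirely in userspace, the application creates it with the ordinary mbuf pool API and simply hands it to the KNI library when the interface is created. A minimal sketch, assuming the 2.x mbuf API (the pool name and sizing values are illustrative only; older releases use the rte_mempool_create() based initialization shown in the KNI sample application):

.. code-block:: c

    #include <rte_lcore.h>
    #include <rte_mbuf.h>

    /* Create the mbuf pool used by both the PMD and the KNI interface.
     * The KNI kernel module only references these buffers; it never
     * allocates or frees them. */
    static struct rte_mempool *
    kni_mbuf_pool_create(void)
    {
        return rte_pktmbuf_pool_create("kni_mbuf_pool", 8192, 256, 0,
                                       RTE_MBUF_DEFAULT_BUF_SIZE,
                                       rte_socket_id());
    }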

:numref:`figure_pkt_flow_kni` shows a typical scenario with packets sent in both directions.

.. _figure_pkt_flow_kni:

.. figure:: img/pkt_flow_kni.*

   Packet Flow via mbufs in the DPDK KNI


Use Case: Ingress
-----------------

On the DPDK RX side, the mbuf is allocated by the PMD in the RX thread context.
This thread will enqueue the mbuf in the rx_q FIFO.
The KNI thread will poll all active KNI devices for the rx_q.
If an mbuf is dequeued, it will be converted to an sk_buff and sent to the net stack via netif_rx().
The dequeued mbuf must be freed, so the same pointer is sent back in the free_q FIFO.

The RX thread, in the same main loop, polls this FIFO and frees the mbuf after dequeuing it.
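
From the application side this corresponds to a simple burst loop in the RX thread. The sketch below is modelled on the KNI sample application; the function name, the burst size and the use of RX queue 0 are illustrative assumptions.

.. code-block:: c

    #include <rte_ethdev.h>
    #include <rte_kni.h>

    #define PKT_BURST_SZ 32

    /* One iteration of the RX thread loop: NIC -> kernel network stack. */
    static void
    kni_ingress(uint8_t port_id, struct rte_kni *kni)
    {
        struct rte_mbuf *pkts[PKT_BURST_SZ];
        unsigned nb_rx, nb_kni, i;

        /* Mbufs are allocated by the PMD inside rte_eth_rx_burst(). */
        nb_rx = rte_eth_rx_burst(port_id, 0, pkts, PKT_BURST_SZ);

        /* Enqueue them towards the kernel (rx_q FIFO); this call also drains
         * the free_q FIFO and frees mbufs the kernel has finished with. */
        nb_kni = rte_kni_tx_burst(kni, pkts, nb_rx);

        /* Anything the FIFO could not accept must be freed by the application. */
        for (i = nb_kni; i < nb_rx; i++)
            rte_pktmbuf_free(pkts[i]);
    }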

Use Case: Egress
----------------

For packet egress, the DPDK application must first enqueue several mbufs to create an mbuf cache on the kernel side.

The packet is received from the Linux net stack by calling the kni_net_tx() callback.
The mbuf is dequeued (without waiting, thanks to the cache) and filled with data from the sk_buff.
The sk_buff is then freed and the mbuf is sent in the tx_q FIFO.

The DPDK TX thread dequeues the mbuf and sends it to the PMD (via rte_eth_tx_burst()).
It then puts the mbuf back in the cache.
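
The matching application-side loop in the TX thread is sketched below, again modelled on the KNI sample application, with the function name, burst size and TX queue 0 as illustrative assumptions. Note that the rte_kni library replenishes the kernel-side mbuf cache on the application's behalf.

.. code-block:: c

    #include <rte_ethdev.h>
    #include <rte_kni.h>

    #define PKT_BURST_SZ 32

    /* One iteration of the TX thread loop: kernel network stack -> NIC. */
    static void
    kni_egress(uint8_t port_id, struct rte_kni *kni)
    {
        struct rte_mbuf *pkts[PKT_BURST_SZ];
        unsigned nb_tx, nb_sent, i;

        /* Dequeue mbufs the kernel placed on the tx_q FIFO; this call also
         * replenishes the kernel-side mbuf cache from the mempool. */
        nb_tx = rte_kni_rx_burst(kni, pkts, PKT_BURST_SZ);

        /* Transmit them through the PMD. */
        nb_sent = rte_eth_tx_burst(port_id, 0, pkts, (uint16_t)nb_tx);

        /* Free any mbufs the PMD did not accept. */
        for (i = nb_sent; i < nb_tx; i++)
            rte_pktmbuf_free(pkts[i]);
    }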

Ethtool
-------

Ethtool is a Linux-specific tool with corresponding support in the kernel
where each net device must register its own callbacks for the supported operations.
The current implementation uses the igb/ixgbe modified Linux drivers for ethtool support.
Ethtool is not supported in i40e and VMs (VF or EM devices).
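
For example, once a KNI interface backed by a supported igb/ixgbe port is up, standard ethtool queries can be issued against it from the shell (vEth0 is an illustrative interface name):

.. code-block:: console

    ethtool -i vEth0
    ethtool vEth0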

Link state and MTU change
-------------------------

Link state and MTU change are network interface-specific operations usually done via ifconfig.
The request is initiated from the kernel side (in the context of the ifconfig process)
and handled by the userspace DPDK application.
The application polls the request, calls the application handler and returns the response back to kernel space.

The application handlers can be registered upon interface creation or explicitly registered/unregistered at runtime.
This provides flexibility in multiprocess scenarios
(where the KNI is created in the primary process but the callbacks are handled in the secondary one).
The constraint is that only a single process can register for and handle the requests.
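
A minimal sketch of how an application typically wires this up is shown below. The callback names and bodies are placeholders; the signatures match the 2.x rte_kni API (rte_kni_register_handlers()/rte_kni_handle_request()) and may differ in other releases.

.. code-block:: c

    #include <rte_kni.h>

    /* Invoked in the application when, e.g., "ifconfig <iface> mtu <n>" is issued. */
    static int
    kni_change_mtu(uint8_t port_id, unsigned new_mtu)
    {
        /* Reconfigure the corresponding DPDK port for the new MTU here. */
        return 0;
    }

    /* Invoked when the interface is brought up or down. */
    static int
    kni_config_network_if(uint8_t port_id, uint8_t if_up)
    {
        /* Start or stop the corresponding DPDK port here. */
        return 0;
    }

    static struct rte_kni_ops kni_ops = {
        .port_id = 0,
        .change_mtu = kni_change_mtu,
        .config_network_if = kni_config_network_if,
    };

    /* Registration and request handling, typically in the main loop of the
     * process that owns the callbacks. Alternatively, pass &kni_ops to
     * rte_kni_alloc() at interface creation time. */
    static void
    kni_control_loop(struct rte_kni *kni)
    {
        rte_kni_register_handlers(kni, &kni_ops);

        for (;;) {
            /* ... forward packets ... */

            /* Poll for pending kernel requests and invoke the handlers. */
            rte_kni_handle_request(kni);
        }
    }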

KNI Working as a Kernel vHost Backend
-------------------------------------

vHost is a kernel module usually working as the backend of virtio (a para-virtualization driver framework)
to accelerate the traffic from the guest to the host.
The DPDK Kernel NIC Interface provides the ability to hook up vHost traffic to a userspace DPDK application.
Together with the DPDK virtio PMD, it significantly improves the throughput between guest and host.
In the scenario where DPDK is running as the fast path in the host, kni-vhost is an efficient path for the traffic.

Overview
~~~~~~~~

vHost-net has three kinds of real backend implementations. They are: 1) tap, 2) macvtap and 3) RAW socket.
The main idea behind kni-vhost is making the KNI work as a RAW socket, attaching it as the backend instance of vHost-net.
It uses the existing interface to vHost-net, so it does not require any kernel hacking,
and is fully compatible with the kernel vhost module.
As vHost is still taking responsibility for communicating with the front-end virtio,
it naturally supports both legacy virtio-net and the DPDK virtio PMD.
There is a small penalty that comes from the non-polling mode of vhost.
However, it scales throughput well when using KNI in multi-thread mode.

.. _figure_vhost_net_arch2:

.. figure:: img/vhost_net_arch.*

   vHost-net Architecture Overview


Packet Flow
~~~~~~~~~~~

There is only a minor difference from the original KNI traffic flows.
On the transmit side, the vhost kthread calls the RAW socket's sendmsg op, which puts the packets into the KNI transmit FIFO.
On the receive side, the KNI kthread gets packets from the KNI receive FIFO, puts them into the queue of the raw socket,
and wakes up the task in the vhost kthread to begin receiving.
All the packet copying, irrespective of whether it is on the transmit or receive side,
happens in the context of the vhost kthread.
Every vhost-net device is exposed to a front-end virtio device in the guest.

.. _figure_kni_traffic_flow:

.. figure:: img/kni_traffic_flow.*

   KNI Traffic Flow


Sample Usage
~~~~~~~~~~~~

Before starting to use KNI as the backend of vhost, the CONFIG_RTE_KNI_VHOST configuration option must be turned on, as shown below.
Otherwise, by default, KNI will not enable its backend support capability.
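
For example, set the option in the DPDK configuration file (config/common_linuxapp in a typical Linux build) before building DPDK:

.. code-block:: console

    CONFIG_RTE_KNI_VHOST=y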

Of course, as a prerequisite, the vhost/vhost-net kernel CONFIG should be chosen before compiling the kernel.

#.  Compile the DPDK and insert uio_pci_generic/igb_uio kernel modules as normal.

#.  Insert the KNI kernel module:

    .. code-block:: console

        insmod ./rte_kni.ko

    If using KNI in multi-thread mode, use the following command line:

    .. code-block:: console

        insmod ./rte_kni.ko kthread_mode=multiple

#.  Running the KNI sample application:

    .. code-block:: console

        examples/kni/build/app/kni -c 0xf0 -n 4 -- -p 0x3 -P --config="(0,4,6),(1,5,7)"

    This command runs the kni sample application with two physical ports.
    Each port pins two forwarding cores (ingress/egress) in user space.

#.  Assign a raw socket to vhost-net during qemu-kvm startup.
    The DPDK does not provide a script to do this since it is easy for the user to customize.
    The following shows the key steps to launch qemu-kvm with kni-vhost:

    .. code-block:: bash

        #!/bin/bash
        echo 1 > /sys/class/net/vEth0/sock_en
        fd=`cat /sys/class/net/vEth0/sock_fd`
        qemu-kvm \
        -name vm1 -cpu host -m 2048 -smp 1 -hda /opt/vm-fc16.img \
        -netdev tap,fd=$fd,id=hostnet1,vhost=on \
        -device virtio-net-pci,netdev=hostnet1,id=net1,bus=pci.0,addr=0x4

It is simple to enable a raw socket using the sysfs sock_en attribute and to get its fd from sock_fd under the KNI device node.

Then use the qemu-kvm command with the -netdev option to assign the raw socket fd as the vhost backend.

.. note::

    The keyword tap must be present, as qemu-kvm currently only supports vhost with a tap backend, so here we cheat qemu-kvm by passing it an existing fd.

Compatibility Configure Option
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

There is a CONFIG_RTE_KNI_VHOST_VNET_HDR_EN configuration option in the DPDK configuration file.
By default, it is set to n, which means the virtio net header is not enabled.
The header is used to support additional features (such as checksum offload, VLAN offload, generic segmentation offload and so on),
which kni-vhost does not yet support.
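
If the header is nevertheless wanted, the option can be changed in the DPDK configuration file before building, for example:

.. code-block:: console

    CONFIG_RTE_KNI_VHOST_VNET_HDR_EN=y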

Even if the option is turned on, kni-vhost will ignore the information that the header contains.
When working with legacy virtio in the guest, it is better to turn off unsupported offload features using ethtool -K.
Otherwise, there may be problems such as an incorrect L4 checksum error.
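
For example, inside the guest (assuming the legacy virtio interface appears as eth0), the relevant offloads can be turned off with:

.. code-block:: console

    ethtool -K eth0 tx off sg off tso off gso off gro off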
279