VNET (VPP Network Stack)
========================

The files associated with the VPP network stack layer are located in the
*./src/vnet* folder. The Network Stack Layer is basically an
instantiation of the code in the other layers. This layer has a vnet
library that provides vectorized layer-2 and 3 networking graph nodes, a
packet generator, and a packet tracer.

In terms of building a packet processing application, vnet provides a
platform-independent subgraph to which one connects a couple of
device-driver nodes.

Typical RX connections include "ethernet-input" \[full software
classification, feeds ipv4-input, ipv6-input, arp-input etc.\] and
"ipv4-input-no-checksum" \[if hardware can classify, perform ipv4 header
checksum\].
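
For orientation, here is a minimal sketch of what such a connection can
look like. The node name "my-device-input" and its (empty) dispatch
function are purely illustrative, not part of vnet; only the next-node
arcs to "ethernet-input" and "ipv4-input-no-checksum" come from the
description above.

```c
#include <vlib/vlib.h>
#include <vnet/vnet.h>

/* Hypothetical device-driver RX node: poll the hardware, turn RX
 * descriptors into vlib buffers, and enqueue them on one of the
 * next-node arcs declared below. */
VLIB_NODE_FN (my_device_input_node) (vlib_main_t * vm,
                                     vlib_node_runtime_t * node,
                                     vlib_frame_t * frame)
{
  /* ... driver-specific RX processing elided ... */
  return 0;
}

VLIB_REGISTER_NODE (my_device_input_node) = {
  .name = "my-device-input",
  .vector_size = sizeof (u32),
  .type = VLIB_NODE_TYPE_INPUT,
  .state = VLIB_NODE_STATE_POLLING,
  .n_next_nodes = 2,
  .next_nodes = {
    [0] = "ethernet-input",         /* full software classification */
    [1] = "ipv4-input-no-checksum", /* hardware already classified */
  },
};
```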

Effective graph dispatch function coding
----------------------------------------

Over the years, multiple coding styles have emerged: a
single/dual/quad loop coding model (with variations) and a
fully-pipelined coding model.

Single/dual loops
-----------------

The single/dual/quad loop model variations conveniently solve problems
where the number of items to process is not known in advance: typical
hardware RX-ring processing. This coding style is also very effective
when a given node will not need to cover a complex set of dependent
reads.

Here is a quad/single loop which can leverage up to AVX-512 SIMD vector
units to convert buffer indices to buffer pointers:

```c
static uword
simulated_ethernet_interface_tx (vlib_main_t * vm,
                                 vlib_node_runtime_t * node,
                                 vlib_frame_t * frame)
{
  u32 n_left_from, *from;
  u32 next_index = 0;
  u32 n_bytes;
  u32 thread_index = vm->thread_index;
  vnet_main_t *vnm = vnet_get_main ();
  vnet_interface_main_t *im = &vnm->interface_main;
  vlib_buffer_t *bufs[VLIB_FRAME_SIZE], **b;
  u16 nexts[VLIB_FRAME_SIZE], *next;

  n_left_from = frame->n_vectors;
  from = vlib_frame_vector_args (frame);

  /*
   * Convert up to VLIB_FRAME_SIZE indices in "from" to
   * buffer pointers in bufs[]
   */
  vlib_get_buffers (vm, from, bufs, n_left_from);
  b = bufs;
  next = nexts;

  /*
   * While we have at least 4 vector elements (pkts) to process..
   */
  while (n_left_from >= 4)
    {
      /* Prefetch next quad-loop iteration. */
      if (PREDICT_TRUE (n_left_from >= 8))
        {
          vlib_prefetch_buffer_header (b[4], STORE);
          vlib_prefetch_buffer_header (b[5], STORE);
          vlib_prefetch_buffer_header (b[6], STORE);
          vlib_prefetch_buffer_header (b[7], STORE);
        }

      /*
       * $$$ Process 4x packets right here...
       * set next[0..3] to send the packets where they need to go
       */

      do_something_to (b[0]);
      do_something_to (b[1]);
      do_something_to (b[2]);
      do_something_to (b[3]);

      /* Process the next 0..4 packets */
      b += 4;
      next += 4;
      n_left_from -= 4;
    }
  /*
   * Clean up 0...3 remaining packets at the end of the incoming frame
   */
  while (n_left_from > 0)
    {
      /*
       * $$$ Process one packet right here...
       * set next[0] to send the packet where it needs to go
       */
      do_something_to (b[0]);

      /* Process the next packet */
      b += 1;
      next += 1;
      n_left_from -= 1;
    }

  /*
   * Send the packets along their respective next-node graph arcs
   * Considerable locality of reference is expected, most if not all
   * packets in the inbound vector will traverse the same next-node
   * arc
   */
  vlib_buffer_enqueue_to_next (vm, node, from, nexts, frame->n_vectors);

  return frame->n_vectors;
}
```

Given a packet processing task to implement, it pays to scout around
looking for similar tasks, and think about using the same coding
pattern. It is not uncommon to recode a given graph node dispatch function
several times during performance optimization.

Creating Packets from Scratch
-----------------------------

At times, it's necessary to create packets from scratch and send
them. Tasks like sending keepalives or actively opening connections
come to mind. It's not difficult, but accurate buffer metadata setup is
required.

### Allocating Buffers

Use vlib_buffer_alloc, which allocates a set of buffer indices. For
low-performance applications, it's OK to allocate one buffer at a
time. Note that vlib_buffer_alloc(...) does NOT initialize buffer
metadata. See below.

In high-performance cases, allocate a vector of buffer indices,
and hand them out from the end of the vector; decrement _vec_len(..)
as buffer indices are allocated. See tcp_alloc_tx_buffers(...) and
tcp_get_free_buffer_index(...) for an example.
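
A minimal sketch of that pattern, assuming a hypothetical per-thread
cache vector; the names my_tx_buffers and my_get_free_buffer_index and
the batch size of 32 are illustrative. See the tcp functions named above
for the real implementation.

```c
/* Hypothetical per-thread buffer-index cache */
static u32 *my_tx_buffers;

static inline int
my_get_free_buffer_index (vlib_main_t * vm, u32 * bidx)
{
  u32 n = vec_len (my_tx_buffers);

  if (PREDICT_FALSE (n == 0))
    {
      /* Refill the cache with a batch of buffer indices */
      vec_validate (my_tx_buffers, 31);
      n = vlib_buffer_alloc (vm, my_tx_buffers, 32);
      if (n == 0)
        return -1;              /* buffer pool exhausted */
      _vec_len (my_tx_buffers) = n;
    }

  /* Hand out indices from the end of the vector */
  *bidx = my_tx_buffers[n - 1];
  _vec_len (my_tx_buffers) = n - 1;
  return 0;
}
```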

### Buffer Initialization Example

The following example shows the **main points**, but is not to be
blindly cut-'n-pasted.

```c
  u32 bi0;
  vlib_buffer_t *b0;
  ip4_header_t *ip;
  udp_header_t *udp;
  u8 *data_dst;

  /* Allocate a buffer */
  if (vlib_buffer_alloc (vm, &bi0, 1) != 1)
    return -1;

  b0 = vlib_get_buffer (vm, bi0);

  /* Initialize the buffer */
  VLIB_BUFFER_TRACE_TRAJECTORY_INIT (b0);

  /* At this point b0->current_data = 0, b0->current_length = 0 */

  /*
   * Copy data into the buffer. This example ASSUMES that data will fit
   * in a single buffer, and is e.g. an ip4 packet.
   */
  if (have_packet_rewrite)
    {
      clib_memcpy (b0->data, data, vec_len (data));
      b0->current_length = vec_len (data);
    }
  else
    {
      /* OR, build a udp-ip packet (for example) */
      ip = vlib_buffer_get_current (b0);
      udp = (udp_header_t *) (ip + 1);
      data_dst = (u8 *) (udp + 1);

      ip->ip_version_and_header_length = 0x45;
      ip->ttl = 254;
      ip->protocol = IP_PROTOCOL_UDP;
      ip->length = clib_host_to_net_u16 (sizeof (*ip) + sizeof (*udp) +
                                         vec_len (udp_data));
      ip->src_address.as_u32 = src_address->as_u32;
      ip->dst_address.as_u32 = dst_address->as_u32;
      udp->src_port = clib_host_to_net_u16 (src_port);
      udp->dst_port = clib_host_to_net_u16 (dst_port);
      udp->length = clib_host_to_net_u16 (sizeof (*udp) + vec_len (udp_data));
      clib_memcpy (data_dst, udp_data, vec_len (udp_data));

      if (compute_udp_checksum)
        {
          /* RFC 7011 section 10.3.2. */
          udp->checksum = ip4_tcp_udp_compute_checksum (vm, b0, ip);
          if (udp->checksum == 0)
            udp->checksum = 0xffff;
        }
      b0->current_length = sizeof (*ip) + sizeof (*udp) + vec_len (udp_data);
    }
  b0->flags |= VLIB_BUFFER_TOTAL_LENGTH_VALID;

  /* sw_if_index 0 is the "local" interface, which always exists */
  vnet_buffer (b0)->sw_if_index[VLIB_RX] = 0;

  /* Use the default FIB index for tx lookup. Set non-zero to use another fib */
  vnet_buffer (b0)->sw_if_index[VLIB_TX] = 0;
```

If your use-case calls for large packet transmission, use
vlib_buffer_chain_append_data_with_alloc(...) to create the requisite
buffer chain.
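
A hedged sketch of that call, assuming the (vm, first, &last, data,
data_len) prototype; check the prototype in your release before copying
this, and note that "payload" and "payload_len" are illustrative names.

```c
  vlib_buffer_t *last = b0;
  u16 n_appended;

  /* Append payload_len bytes, allocating and chaining buffers as needed */
  n_appended = vlib_buffer_chain_append_data_with_alloc (vm, b0, &last,
                                                         payload,
                                                         payload_len);
  if (n_appended < payload_len)
    {
      /* Buffer pool exhausted: free the chain and give up */
    }
```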

### Enqueueing packets for lookup and transmission

The simplest way to send a set of packets is to use
vlib_get_frame_to_node(...) to allocate fresh frame(s) to
ip4_lookup_node or ip6_lookup_node, add the constructed buffer
indices, and dispatch the frame using vlib_put_frame_to_node(...).

```c
    vlib_frame_t *f;
    u32 *to_next;
    int i;

    f = vlib_get_frame_to_node (vm, ip4_lookup_node.index);
    f->n_vectors = vec_len (buffer_indices_to_send);
    to_next = vlib_frame_vector_args (f);

    for (i = 0; i < vec_len (buffer_indices_to_send); i++)
      to_next[i] = buffer_indices_to_send[i];

    vlib_put_frame_to_node (vm, ip4_lookup_node.index, f);
```

It is inefficient to allocate and schedule single-packet frames.
That's acceptable if you need to send one packet per second, but it
should **not** occur in a for-loop!

Packet tracer
-------------

Vlib includes a frame element \[packet\] trace facility, with a simple
debug CLI interface. The CLI is straightforward: "trace add
input-node-name count" to start capturing packet traces.

To trace 100 packets on a typical x86\_64 system running the dpdk
plugin: "trace add dpdk-input 100". When using the packet generator:
"trace add pg-input 100".

To display the packet trace: "show trace"
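
Putting those commands together, a typical debug CLI session looks like
this ("clear trace" discards the currently captured records):

```
    vpp# clear trace
    vpp# trace add dpdk-input 100
    vpp# show trace
```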

Each graph node has the opportunity to capture its own trace data. It is
almost always a good idea to do so. The trace capture APIs are simple.

The packet capture APIs snapshot binary data, to minimize processing at
capture time. Each participating graph node initialization provides a
vppinfra format-style user function to pretty-print data when required
by the VLIB "show trace" command.

Set the VLIB node registration ".format\_trace" member to the name of
the per-graph node format function.

Here's a simple example:

```c
    u8 * my_node_format_trace (u8 * s, va_list * args)
    {
        vlib_main_t * vm = va_arg (*args, vlib_main_t *);
        vlib_node_t * node = va_arg (*args, vlib_node_t *);
        my_node_trace_t * t = va_arg (*args, my_node_trace_t *);

        s = format (s, "My trace data was: %d", t-><whatever>);

        return s;
    }
```

The trace framework hands the per-node format function the data it
captured as the packet whizzed by. The format function pretty-prints the
data as desired.
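
On the capture side, a node typically snapshots its trace data in the
dispatch function and points its registration at the format function. A
minimal sketch, in which my_node_trace_t, its single field, and the node
name are illustrative:

```c
typedef struct
{
  u32 whatever;                 /* whatever per-packet data you care about */
} my_node_trace_t;

/* In the node dispatch function, per traced packet: */
if (PREDICT_FALSE (b0->flags & VLIB_BUFFER_IS_TRACED))
  {
    my_node_trace_t *t = vlib_add_trace (vm, node, b0, sizeof (*t));
    t->whatever = 123;          /* snapshot binary data; format it later */
  }

/* In the node registration: */
VLIB_REGISTER_NODE (my_node) = {
  .name = "my-node",
  /* ... */
  .format_trace = my_node_format_trace,
};
```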

Graph Dispatcher Pcap Tracing
-----------------------------

The vpp graph dispatcher knows how to capture vectors of packets in pcap
format as they're dispatched. The pcap captures are as follows:

```
    VPP graph dispatch trace record description:

        0                   1                   2                   3
        0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       | Major Version | Minor Version | NStrings      | ProtoHint     |
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       | Buffer index (big endian)                                     |
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       + VPP graph node name ...     ...               | NULL octet    |
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       | Buffer Metadata ... ...                       | NULL octet    |
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       | Buffer Opaque ... ...                         | NULL octet    |
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       | Buffer Opaque 2 ... ...                       | NULL octet    |
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       | VPP ASCII packet trace (if NStrings > 4)      | NULL octet    |
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       | Packet data (up to 16K)                                       |
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
```

Graph dispatch records comprise a version stamp, an indication of how
many NULL-terminated strings will follow the record header and precede
packet data, and a protocol hint.

The buffer index is an opaque 32-bit cookie which allows consumers of
these data to easily filter/track single packets as they traverse the
forwarding graph.

Multiple records per packet are normal, and to be expected. Packets
will appear multiple times as they traverse the vpp forwarding
graph. In this way, vpp graph dispatch traces are significantly
different from regular network packet captures from an end-station.
This property complicates stateful packet analysis.

Restricting stateful analysis to records from a single vpp graph node
such as "ethernet-input" seems likely to improve the situation.

As of this writing: major version = 1, minor version = 0. NStrings
SHOULD be 4 or 5. Consumers SHOULD be wary of values less than 4 or
greater than 5. They MAY attempt to display the claimed number of
strings, or they MAY treat the condition as an error.

Here is the current set of protocol hints:

```c
    typedef enum
      {
        VLIB_NODE_PROTO_HINT_NONE = 0,
        VLIB_NODE_PROTO_HINT_ETHERNET,
        VLIB_NODE_PROTO_HINT_IP4,
        VLIB_NODE_PROTO_HINT_IP6,
        VLIB_NODE_PROTO_HINT_TCP,
        VLIB_NODE_PROTO_HINT_UDP,
        VLIB_NODE_N_PROTO_HINTS,
      } vlib_node_proto_hint_t;
```

Example: VLIB_NODE_PROTO_HINT_IP6 means that the first octet of packet
data SHOULD be 0x60, and should begin an ipv6 packet header.

Downstream consumers of these data SHOULD pay attention to the
protocol hint. They MUST tolerate inaccurate hints, which MAY occur
from time to time.
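
For example, a consumer might apply a cheap plausibility check before
trusting the hint; a sketch, where hint_plausible is a hypothetical
consumer-side helper, not a vpp API:

```c
static int
hint_plausible (u8 proto_hint, u8 first_octet)
{
  switch (proto_hint)
    {
    case VLIB_NODE_PROTO_HINT_IP4:
      return (first_octet & 0xf0) == 0x40;
    case VLIB_NODE_PROTO_HINT_IP6:
      return (first_octet & 0xf0) == 0x60;
    default:
      /* No cheap check for the other hints; tolerate them */
      return 1;
    }
}
```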

### Dispatch Pcap Trace Debug CLI

To start a dispatch trace capture of up to 10,000 trace records:

```
     pcap dispatch trace on max 10000 file dispatch.pcap
```

To start a dispatch trace which will also include standard vpp packet
tracing for packets which originate in dpdk-input:

```
     pcap dispatch trace on max 10000 file dispatch.pcap buffer-trace dpdk-input 1000
```

To save the pcap trace, e.g. in /tmp/dispatch.pcap:

```
    pcap dispatch trace off
```

### Wireshark dissection of dispatch pcap traces

It almost goes without saying that we built a companion wireshark
dissector to display these traces. As of this writing, we have
upstreamed the wireshark dissector.

Since it will be a while before wireshark/master/latest makes it into
all of the popular Linux distros, please see the "How to build a vpp
dispatch trace aware Wireshark" page for build info.

Here is a sample packet dissection, with some fields omitted for
clarity.  The point is that the wireshark dissector accurately
displays **all** of the vpp buffer metadata, and the name of the graph
node in question.

```
    Frame 1: 2216 bytes on wire (17728 bits), 2216 bytes captured (17728 bits)
        Encapsulation type: USER 13 (58)
        [Protocols in frame: vpp:vpp-metadata:vpp-opaque:vpp-opaque2:eth:ethertype:ip:tcp:data]
    VPP Dispatch Trace
        BufferIndex: 0x00036663
    NodeName: ethernet-input
    VPP Buffer Metadata
        Metadata: flags:
        Metadata: current_data: 0, current_length: 102
        Metadata: current_config_index: 0, flow_id: 0, next_buffer: 0
        Metadata: error: 0, n_add_refs: 0, buffer_pool_index: 0
        Metadata: trace_index: 0, recycle_count: 0, len_not_first_buf: 0
        Metadata: free_list_index: 0
        Metadata:
    VPP Buffer Opaque
        Opaque: raw: 00000007 ffffffff 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
        Opaque: sw_if_index[VLIB_RX]: 7, sw_if_index[VLIB_TX]: -1
        Opaque: L2 offset 0, L3 offset 0, L4 offset 0, feature arc index 0
        Opaque: ip.adj_index[VLIB_RX]: 0, ip.adj_index[VLIB_TX]: 0
        Opaque: ip.flow_hash: 0x0, ip.save_protocol: 0x0, ip.fib_index: 0
        Opaque: ip.save_rewrite_length: 0, ip.rpf_id: 0
        Opaque: ip.icmp.type: 0 ip.icmp.code: 0, ip.icmp.data: 0x0
        Opaque: ip.reass.next_index: 0, ip.reass.estimated_mtu: 0
        Opaque: ip.reass.fragment_first: 0 ip.reass.fragment_last: 0
        Opaque: ip.reass.range_first: 0 ip.reass.range_last: 0
        Opaque: ip.reass.next_range_bi: 0x0, ip.reass.ip6_frag_hdr_offset: 0
        Opaque: mpls.ttl: 0, mpls.exp: 0, mpls.first: 0, mpls.save_rewrite_length: 0, mpls.bier.n_bytes: 0
        Opaque: l2.feature_bitmap: 00000000, l2.bd_index: 0, l2.l2_len: 0, l2.shg: 0, l2.l2fib_sn: 0, l2.bd_age: 0
        Opaque: l2.feature_bitmap_input:   none configured, L2.feature_bitmap_output:   none configured
        Opaque: l2t.next_index: 0, l2t.session_index: 0
        Opaque: l2_classify.table_index: 0, l2_classify.opaque_index: 0, l2_classify.hash: 0x0
        Opaque: policer.index: 0
        Opaque: ipsec.flags: 0x0, ipsec.sad_index: 0
        Opaque: map.mtu: 0
        Opaque: map_t.v6.saddr: 0x0, map_t.v6.daddr: 0x0, map_t.v6.frag_offset: 0, map_t.v6.l4_offset: 0
        Opaque: map_t.v6.l4_protocol: 0, map_t.checksum_offset: 0, map_t.mtu: 0
        Opaque: ip_frag.mtu: 0, ip_frag.next_index: 0, ip_frag.flags: 0x0
        Opaque: cop.current_config_index: 0
        Opaque: lisp.overlay_afi: 0
        Opaque: tcp.connection_index: 0, tcp.seq_number: 0, tcp.seq_end: 0, tcp.ack_number: 0, tcp.hdr_offset: 0, tcp.data_offset: 0
        Opaque: tcp.data_len: 0, tcp.flags: 0x0
        Opaque: sctp.connection_index: 0, sctp.sid: 0, sctp.ssn: 0, sctp.tsn: 0, sctp.hdr_offset: 0
        Opaque: sctp.data_offset: 0, sctp.data_len: 0, sctp.subconn_idx: 0, sctp.flags: 0x0
        Opaque: snat.flags: 0x0
        Opaque:
    VPP Buffer Opaque2
        Opaque2: raw: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
        Opaque2: qos.bits: 0, qos.source: 0
        Opaque2: loop_counter: 0
        Opaque2: gbp.flags: 0, gbp.src_epg: 0
        Opaque2: pg_replay_timestamp: 0
        Opaque2:
    Ethernet II, Src: 06:d6:01:41:3b:92 (06:d6:01:41:3b:92), Dst: IntelCor_3d:f6
    Transmission Control Protocol, Src Port: 22432, Dst Port: 54084, Seq: 1, Ack: 1, Len: 36
        Source Port: 22432
        Destination Port: 54084
        TCP payload (36 bytes)
    Data (36 bytes)

    0000  cf aa 8b f5 53 14 d4 c7 29 75 3e 56 63 93 9d 11   ....S...)u>Vc...
    0010  e5 f2 92 27 86 56 4c 21 ce c5 23 46 d7 eb ec 0d   ...'.VL!..#F....
    0020  a8 98 36 5a                                       ..6Z
        Data: cfaa8bf55314d4c729753e5663939d11e5f2922786564c21…
        [Length: 36]
```

It's a matter of a couple of mouse-clicks in Wireshark to filter the
trace to a specific buffer index. With that specific kind of filtration,
one can watch a packet walk through the forwarding graph, noting any/all
metadata changes, header checksum changes, and so forth.

This should be of significant value when developing new vpp graph
nodes. If new code mispositions b->current_data, it will be completely
obvious from looking at the dispatch trace in wireshark.

## pcap rx, tx, and drop tracing

vpp also supports rx, tx, and drop packet capture in pcap format,
through the "pcap trace" debug CLI command.

This command is used to start or stop a packet capture, or show the
status of packet capture. Each of "pcap trace rx", "pcap trace tx",
and "pcap trace drop" is implemented.  Supply one or more of "rx",
"tx", and "drop" to enable multiple simultaneous capture types.

These commands have the following optional parameters:

- <b>rx</b> - trace received packets.

- <b>tx</b> - trace transmitted packets.

- <b>drop</b> - trace dropped packets.

- <b>max _nnnn_</b> - file size, number of packet captures. Once
  _nnnn_ packets have been received, the trace buffer is flushed
  to the indicated file. Defaults to 1000. Can only be updated if packet
  capture is off.

- <b>max-bytes-per-pkt _nnnn_</b> - maximum number of bytes to trace
  on a per-packet basis. Must be greater than 32 and less than 9000.
  Default value: 512.

- <b>filter</b> - capture only packets which match the current pcap
  rx / tx / drop trace filter, which must be configured first; see the
  next section. Use <b>classify filter pcap...</b> to configure the
  filter. The filter will only be executed if the per-interface or
  any-interface tests fail.

- <b>intfc _interface_ | _any_</b> - Used to specify a given interface,
  or use '<em>any</em>' to run packet capture on all interfaces.
  '<em>any</em>' is the default if not provided. Settings from a previous
  packet capture are preserved, so '<em>any</em>' can be used to reset
  the interface setting.

- <b>file _filename_</b> - Used to specify the output filename. The
  file will be placed in the '<em>/tmp</em>' directory.  If _filename_
  already exists, the file will be overwritten. If no filename is
  provided, '<em>/tmp/rx.pcap or tx.pcap</em>' will be used, depending
  on capture direction. Can only be updated when pcap capture is off.

- <b>status</b> - Displays the current status and configured
  attributes associated with a packet capture. If packet capture is in
  progress, '<em>status</em>' also will return the number of packets
  currently in the buffer. Any additional attributes entered on the
  command line with a '<em>status</em>' request will be ignored.
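
For example, to capture up to 2000 received and transmitted packets on a
single interface, trimmed to 128 bytes each, and then check on the
capture; the interface and file names are illustrative:

```
    pcap trace rx tx max 2000 intfc GigabitEthernet3/0/0 max-bytes-per-pkt 128 file vppcapture.pcap
    pcap trace status
```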

## packet trace capture filtering

The "classify filter pcap | <interface-name>" debug CLI command
constructs an arbitrary set of packet classifier tables for use with
"pcap rx | tx | drop trace," and with the vpp packet tracer on a
per-interface basis.

Packets which match a rule in the classifier table chain will be
traced. The tables are automatically ordered so that matches in the
most specific table are tried first.

It's reasonably likely that folks will configure a single table with
one or two matches. As a result, we configure 8 hash buckets and 128K
of match rule space by default. One can override the defaults by
specifying "buckets <nnn>" and "memory-size <xxx>" as desired.

To build up complex filter chains, repeatedly issue the classify
filter debug CLI command. Each command must specify the desired mask
and match values. If a classifier table with a suitable mask already
exists, the CLI command adds a match rule to the existing table.  If
not, the CLI command adds a new table with the indicated mask rule.

### Configure a simple pcap classify filter

```
    classify filter pcap mask l3 ip4 src match l3 ip4 src 192.168.1.11
    pcap rx trace on max 100 filter
```

### Configure a simple interface packet-tracer filter

```
    classify filter GigabitEthernet3/0/0 mask l3 ip4 src match l3 ip4 src 192.168.1.11
    [device-driver debug CLI TBD]
```

### Configure another fairly simple pcap classify filter

```
   classify filter pcap mask l3 ip4 src dst match l3 ip4 src 192.168.1.10 dst 192.168.2.10
   pcap tx trace on max 100 filter
```

### Clear all current classifier filters

```
    classify filter del
```

### To inspect the classifier tables

```
   show classify tables [verbose]
```

The verbose form displays all of the match rules, with hit-counters.

### Terse description of the "mask <xxx>" syntax:

```
    l2 src dst proto tag1 tag2 ignore-tag1 ignore-tag2 cos1 cos2 dot1q dot1ad
    l3 ip4 <ip4-mask> ip6 <ip6-mask>
    <ip4-mask> version hdr_length src[/width] dst[/width]
               tos length fragment_id ttl protocol checksum
    <ip6-mask> version traffic-class flow-label src dst proto
               payload_length hop_limit protocol
    l4 tcp <tcp-mask> udp <udp-mask> src_port dst_port
    <tcp-mask> src dst  # ports
    <udp-mask> src_port dst_port
```

To construct **matches**, add the values to match after the indicated
keywords in the mask syntax. For example: "... mask l3 ip4 src" ->
"... match l3 ip4 src 192.168.1.11"
601