> > No, it is not PMD that runs the processor in a polling loop.
> > It is the application itself, that may or may not busy loop,
> > depending on the application programmer's choice.
>
> From one of my earlier references [2]:
>
> "we found that a poll mode driver (PMD)
> thread accounted for approximately 99.7 percent
> CPU occupancy (a full core utilization)."
>
> And further on:
>
> "we found that the thread kept spinning on the following code block:
>
>     for (;;) {
>         for (i = 0; i < poll_cnt; i++) {
>             dp_netdev_process_rxq_port(pmd, poll_list[i].port, poll_list[i].rx);
>         }
>     }
> This indicates that the thread was continuously
> monitoring and executing the receiving data path."
This comes from OVS code and shows an OVS thread spinning, not the DPDK
PMD itself. Blame the OVS application for not using e.g. _mm_pause()
and burning the CPU like crazy.
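
A minimal sketch of what such pausing could look like, assuming a
generic poll loop; rx_poll_all() and the idle threshold are made-up
placeholders for illustration, not OVS or DPDK APIs:

    #include <immintrin.h>              /* _mm_pause() */

    extern unsigned rx_poll_all(void);  /* hypothetical: packets handled */

    void poll_main(void)
    {
        unsigned idle = 0;

        for (;;) {
            if (rx_poll_all() == 0) {
                /* Empty poll: PAUSE hints the CPU this is a spin-wait,
                 * cutting power draw and pipeline pressure instead of
                 * burning the core at full speed. */
                if (++idle > 64)
                    _mm_pause();
            } else {
                idle = 0;
            }
        }
    }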
For comparison, take a look at the top+i7z output below from a
DPDK-based 100G DDoS scrubber, currently lifting some low traffic on
cores 1-13 of a 16-core host. It uses naive DPDK rte_pause() throttling
to enter C1 (a sketch of that throttling follows the stats output).
Tasks: 342 total, 1 running, 195 sleeping, 0 stopped, 0 zombie
%Cpu(s): 6.6 us, 0.6 sy, 0.0 ni, 89.7 id, 3.1 wa, 0.0 hi, 0.0 si, 0.0 st
Core [core-id]    Actual Freq (Mult.)   C0%    Halt(C1)%   C3%   C6%    Temp   VCore
Core  1 [0]:      1467.73 (14.68x)      2.15   5.35        1     92.3   43     0.6724
Core  2 [1]:      1201.09 (12.01x)      11.7   93.9        0     0      39     0.6575
Core  3 [2]:      1200.06 (12.00x)      11.8   93.8        0     0      42     0.6543
Core  4 [3]:      1200.14 (12.00x)      11.8   93.8        0     0      41     0.6549
Core  5 [4]:      1200.10 (12.00x)      11.8   93.8        0     0      41     0.6526
Core  6 [5]:      1200.12 (12.00x)      11.8   93.8        0     0      40     0.6559
Core  7 [6]:      1201.01 (12.01x)      11.8   93.8        0     0      41     0.6559
Core  8 [7]:      1201.02 (12.01x)      11.8   93.8        0     0      43     0.6525
Core  9 [8]:      1201.00 (12.01x)      11.8   93.8        0     0      41     0.6857
Core 10 [9]:      1201.04 (12.01x)      11.8   93.8        0     0      40     0.6541
Core 11 [10]:     1201.95 (12.02x)      13.6   92.9        0     0      40     0.6558
Core 12 [11]:     1201.02 (12.01x)      11.8   93.8        0     0      42     0.6526
Core 13 [12]:     1204.97 (12.05x)      17.6   90.8        0     0      45     0.6814
Core 14 [13]:     1248.39 (12.48x)      28.2   84.7        0     0      41     0.6855
Core 15 [14]:     2790.74 (27.91x)      91.9   0           1     1      41     0.8885  <-- not PMD
Core 16 [15]:     1262.29 (12.62x)      13.1   34.9        1.7   56.2   43     0.6616
$ dataplanectl stats fcore | grep total
fcore total idle 393788223887 work 860443658 (0.2%) (forced-idle 7458486526622) recv 202201388561 drop 61259353721 (30.3%) limit 269909758 (0.1%) pass 140606076622 (69.6%) ingress 66048460 (0.0%/0.0%) sent 162580376914 (80.4%/100.0%) overflow 0 (0.0%) sampled 628488188/628488188
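
For reference, a minimal sketch of what such naive rte_pause()
throttling could look like; the burst size, backoff threshold and the
placeholder packet handling are invented for illustration, not the
scrubber's actual code:

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>
    #include <rte_pause.h>

    #define BURST 32

    static void rx_loop(uint16_t port, uint16_t queue)
    {
        struct rte_mbuf *bufs[BURST];
        unsigned idle = 0;

        for (;;) {
            uint16_t nb = rte_eth_rx_burst(port, queue, bufs, BURST);

            if (nb == 0) {
                /* Empty poll: each rte_pause() issues a PAUSE
                 * instruction, de-prioritising the spin-wait; per the
                 * numbers above, this is enough to keep the PMD cores
                 * mostly out of C0 on this host. */
                if (++idle > 100)
                    for (int i = 0; i < 64; i++)
                        rte_pause();
                continue;
            }

            idle = 0;
            for (uint16_t i = 0; i < nb; i++)
                rte_pktmbuf_free(bufs[i]);  /* placeholder: real work here */
        }
    }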
--
Pawel Malachowski
@pawmal80