Tags: linux, performance, profiling, cpu-architecture, perf

Why does perf stat not count cycles:u on Broadwell CPU with hyperthreading disabled in BIOS?


Given: Broadwell CPU with hyperthreading disabled in BIOS

[root@ny4srv03 ~]# lscpu
Architecture:            x86_64
  CPU op-mode(s):        32-bit, 64-bit
  Address sizes:         46 bits physical, 48 bits virtual
  Byte Order:            Little Endian
CPU(s):                  44
  On-line CPU(s) list:   0-43
Vendor ID:               GenuineIntel
  BIOS Vendor ID:        Intel
  Model name:            Intel(R) Xeon(R) CPU E5-2696 v4 @ 2.20GHz
    BIOS Model name:     Intel(R) Xeon(R) CPU E5-2696 v4 @ 2.20GHz
    CPU family:          6
    Model:               79
    Thread(s) per core:  1
    Core(s) per socket:  22
    Socket(s):           2
    Stepping:            1
    CPU max MHz:         3700.0000
    CPU min MHz:         1200.0000
    BogoMIPS:            4399.69
    Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aper
                         fmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_
                         l3 invpcid_single intel_ppin tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a rdseed adx smap intel_pt xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local d
                         therm ida arat pln pts
Virtualization features:
  Virtualization:        VT-x
Caches (sum of all):
  L1d:                   1.4 MiB (44 instances)
  L1i:                   1.4 MiB (44 instances)
  L2:                    11 MiB (44 instances)
  L3:                    110 MiB (2 instances)
NUMA:
  NUMA node(s):          2
  NUMA node0 CPU(s):     0-21
  NUMA node1 CPU(s):     22-43
Vulnerabilities:
  Itlb multihit:         KVM: Mitigation: VMX disabled
  L1tf:                  Mitigation; PTE Inversion; VMX vulnerable, SMT disabled
  Mds:                   Vulnerable; SMT disabled
  Meltdown:              Vulnerable
  Mmio stale data:       Vulnerable
  Retbleed:              Not affected
  Spec store bypass:     Vulnerable
  Spectre v1:            Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
  Spectre v2:            Vulnerable, STIBP: disabled, PBRSB-eIBRS: Not affected
  Srbds:                 Not affected
  Tsx async abort:       Vulnerable

which, according to the Intel 64 and IA-32 Architectures Software Developer's Manual,

If a processor core is shared by two logical processors, each logical processor can access up to four counters (IA32_PMC0-IA32_PMC3). This is the same as in the prior generation for processors based on Nehalem microarchitecture. If a processor core is not shared by two logical processors, up to eight general-purpose counters are visible. If CPUID.0AH:EAX[15:8] reports 8 counters, then IA32_PMC4-IA32_PMC7 would occupy MSR addresses 0C5H through 0C8H. Each counter is accompanied by an event select MSR (IA32_PERFEVTSEL4-IA32_PERFEVTSEL7).

should have 8 performance counters accessible, and cpuid shows exactly that

[root@ny4srv03 ~]# cpuid -1 | grep counters
      number of counters per logical processor = 0x8 (8)
      number of contiguous fixed counters      = 0x3 (3)
      bit width of fixed counters              = 0x30 (48)

However, if I use perf in the following way (under the root account and with kernel.perf_event_paranoid set to -1), I get some strange results:

[root@ny4srv03 ~]# perf stat \
  -r 100 \
  -e cycles:u \
  -e instructions:u \
  -e branches:u \
  -e branch-misses:u \
  -e cache-references:u \
  -e cache-misses:u \
  -e faults:u \
  ls>/dev/null

 Performance counter stats for 'ls' (100 runs):

                 0      cycles:u
            668753      instructions:u                                                ( +-  0.01% )
            131991      branches:u                                                    ( +-  0.00% )
              6936      branch-misses:u           #    5.25% of all branches          ( +-  0.33% )
             11105      cache-references:u                                            ( +-  0.13% )
                 6      cache-misses:u            #    0.055 % of all cache refs      ( +-  5.86% )
               103      faults:u                                                      ( +-  0.19% )

        0.00100211 +- 0.00000487 seconds time elapsed  ( +-  0.49% )

cycles:u is always reported as 0, no matter how many times I run perf (note the -r 100 parameter), until I remove one of the branches:u, branch-misses:u, cache-references:u, or cache-misses:u events. In that case perf works as expected:

[root@ny4srv03 ~]# perf stat \
  -r 100 \
  -e cycles:u \
  -e instructions:u \
  -e branches:u \
  -e branch-misses:u \
  -e cache-references:u \
  -e faults:u \
  ls>/dev/null

 Performance counter stats for 'ls' (100 runs):

            614142      cycles:u                                                      ( +-  0.06% )
            668790      instructions:u            #    1.09  insn per cycle           ( +-  0.00% )
            132052      branches:u                                                    ( +-  0.00% )
              6874      branch-misses:u           #    5.21% of all branches          ( +-  0.11% )
             10735      cache-references:u                                            ( +-  0.05% )
               101      faults:u                                                      ( +-  0.06% )

        0.00095650 +- 0.00000108 seconds time elapsed  ( +-  0.11% )

perf also works as expected in these cases:

  1. When the cycles event is requested either without modifiers or with the :k modifier:
[root@ny4srv03 ~]# perf stat \
  -r 100 \
  -e cycles \
  -e instructions:u \
  -e branches:u \
  -e branch-misses:u \
  -e cache-references:u \
  -e cache-misses:u \
  -e faults:u \
  ls>/dev/null

 Performance counter stats for 'ls' (100 runs):

           1841276      cycles                                                        ( +-  0.79% )
            668400      instructions:u                                                ( +-  0.00% )
            131966      branches:u                                                    ( +-  0.00% )
              6121      branch-misses:u           #    4.64% of all branches          ( +-  0.40% )
             10987      cache-references:u                                            ( +-  0.16% )
                 0      cache-misses:u            #    0.000 % of all cache refs
               102      faults:u                                                      ( +-  0.18% )

        0.00102359 +- 0.00000649 seconds time elapsed  ( +-  0.63% )

  2. When hyperthreading is enabled in BIOS and disabled with the nosmt kernel parameter:
[root@ny4srv03 ~]# perf stat \
  -r 100 \
  -e cycles:u \
  -e instructions:u \
  -e branches:u \
  -e branch-misses:u \
  -e cache-references:u \
  -e cache-misses:u \
  -e faults:u \
  ls>/dev/null

 Performance counter stats for 'ls' (100 runs):

            618443      cycles:u                                                      ( +-  0.39% )
            668466      instructions:u            #    1.05  insn per cycle           ( +-  0.00% )
            131968      branches:u                                                    ( +-  0.00% )
              6529      branch-misses:u           #    4.95% of all branches          ( +-  0.34% )
             11096      cache-references:u                                            ( +-  0.47% )
                 1      cache-misses:u            #    0.010 % of all cache refs      ( +- 53.16% )
               107      faults:u                                                      ( +-  0.18% )

        0.00097825 +- 0.00000554 seconds time elapsed  ( +-  0.57% )

In this case cpuid also shows that only 4 performance counters are available:

[root@ny4srv03 ~]# cpuid -1 | grep counters
      number of counters per logical processor = 0x4 (4)
      number of contiguous fixed counters      = 0x3 (3)
      bit width of fixed counters              = 0x30 (48)

So I'm wondering whether there is a bug in perf or some kind of system misconfiguration. Could you please help?

Update 1

Running perf stat with -d shows a hint that the NMI watchdog is enabled:

[root@ny4srv03 likwid]# perf stat \
   -e cycles:u \
   -e instructions:u \
   -e branches:u \
   -e branch-misses:u \
   -e cache-references:u \
   -e cache-misses:u \
   -e faults:u \
   -d \
   ls>/dev/null

 Performance counter stats for 'ls':

                 0      cycles:u
            709098      instructions:u
            140131      branches:u
              6826      branch-misses:u           #    4.87% of all branches
             11287      cache-references:u
                 0      cache-misses:u            #    0.000 % of all cache refs
               104      faults:u
            593753      L1-dcache-loads
             32677      L1-dcache-load-misses     #    5.50% of all L1-dcache accesses
              8679      LLC-loads
     <not counted>      LLC-load-misses                                               (0.00%)

       0.001102213 seconds time elapsed

       0.000000000 seconds user
       0.001134000 seconds sys


Some events weren't counted. Try disabling the NMI watchdog:
    echo 0 > /proc/sys/kernel/nmi_watchdog
    perf stat ...
    echo 1 > /proc/sys/kernel/nmi_watchdog

Disabling it gives the expected result:

echo 0 > /proc/sys/kernel/nmi_watchdog

[root@ny4srv03 likwid]# perf stat \
   -e cycles:u \
   -e instructions:u \
   -e branches:u \
   -e branch-misses:u \
   -e cache-references:u \
   -e cache-misses:u \
   -e faults:u \
   -d \
   ls>/dev/null

 Performance counter stats for 'ls':

            745760      cycles:u
            708833      instructions:u            #    0.95  insn per cycle
            140122      branches:u
              6757      branch-misses:u           #    4.82% of all branches
             11503      cache-references:u
                 0      cache-misses:u            #    0.000 % of all cache refs
               101      faults:u
            586223      L1-dcache-loads
             32856      L1-dcache-load-misses     #    5.60% of all L1-dcache accesses
              8794      LLC-loads
                29      LLC-load-misses           #    0.33% of all LL-cache accesses

       0.001000925 seconds time elapsed

       0.000000000 seconds user
       0.001080000 seconds sys

But this still does not explain why cycles:u is 0 with nmi_watchdog enabled, even though dmesg shows

[    0.300779] NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.

Update 2

I found this nice comment in the sources of the likwid tool suite:

Please be aware that the counters PMC4-7 are broken on Intel Broadwell. They don't increment if either user- or kernel-level filtering is applied. User-level filtering is default in LIKWID, hence kernel-level filtering is added automatically for PMC4-7. The returned counts can be much higher.

This could explain the behaviour; now it would be interesting to find the origin of this information.


Solution

  • This is erratum BDE104, with the NMI watchdog occupying a fixed counter so cycles has to use a programmable counter.


    From Intel's Xeon-D "specification update" (errata) document (I didn't find one for regular Xeon v4)

    BDE104: General-Purpose Performance Monitoring Counters 4-7 Do Not Count With USR Mode Only Filtering

    Problem: The IA32_PMC4-7 MSRs (C5H-C8H) general-purpose performance monitoring counters will not count when the associated CPL filter selection in the IA32_PERFEVTSELx MSRs' (18AH-18DH) USR field (bit 16) is set while the OS field (bit 17) is not set.

    Implication: Software depending upon IA32_PMC4-7 to count only USR events will not operate as expected. Counting OS only events or OS and USR events together is unaffected by this erratum.

    Workaround: None identified.
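    The USR/OS fields the erratum describes can be sketched in code. The following builds IA32_PERFEVTSELx values for the unhalted-core-cycles event (event 0x3C, umask 0x00, which is what cycles maps to on a programmable counter) with USR-only filtering (the case broken on PMC4-7) and with USR|OS (unaffected):

    ```c
    /* Sketch of the IA32_PERFEVTSELx fields relevant to erratum BDE104:
     * USR (bit 16) and OS (bit 17) CPL filters. */
    #include <stdio.h>
    #include <stdint.h>

    #define PERFEVTSEL_USR (1ULL << 16) /* count at CPL > 0 (user) */
    #define PERFEVTSEL_OS  (1ULL << 17) /* count at CPL 0 (kernel) */
    #define PERFEVTSEL_EN  (1ULL << 22) /* enable the counter */

    static uint64_t evtsel(uint8_t event, uint8_t umask, uint64_t cpl)
    {
        return (uint64_t)event | ((uint64_t)umask << 8) | cpl | PERFEVTSEL_EN;
    }

    int main(void)
    {
        /* cycles:u -- USR only: the broken case on PMC4-7 */
        uint64_t usr_only = evtsel(0x3C, 0x00, PERFEVTSEL_USR);
        /* cycles (no modifier) -- USR|OS: unaffected by the erratum */
        uint64_t usr_os   = evtsel(0x3C, 0x00,
                                   PERFEVTSEL_USR | PERFEVTSEL_OS);

        printf("USR only: 0x%llx\n", (unsigned long long)usr_only);
        printf("USR|OS:   0x%llx\n", (unsigned long long)usr_os);
        return 0;
    }
    ```

    This is why cycles (no modifier) and cycles:k work in the question's tests: they set the OS bit, so the erratum does not apply.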


    The NMI watchdog occupies fixed counter 1, the one that can normally count the cycles event. This makes perf choose a programmable counter for it, apparently picking one of the buggy ones.

    With the NMI watchdog disabled, perf uses fixed counter #1 for cycles. (It apparently supports user/kernel/both masking.)

    I tested on my Skylake system with HT enabled, so 4 programmable counters per logical core, plus the fixed counters.

    • NMI watchdog disabled: cycles+instructions + 4 other events - no multiplexing.
    • NMI watchdog disabled: cycles+instructions + 5 other events - multiplexing. (Numbers like (86.32%) in a new column at the right, indicating how much of the time this event was active for; perf extrapolates from that fraction to the total time.)
    • NMI watchdog disabled: 5 events not including cycles or instructions - multiplexing. (Confirms that cycles and instructions use fixed counters).
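    The extrapolation perf does for a multiplexed event can be sketched as follows: the raw count is scaled by time_enabled / time_running, and the percentage printed in the right-hand column is time_running / time_enabled. This is a simplified model of the scaling, not perf's exact source:

    ```c
    /* Sketch: how a multiplexed count is extrapolated to the full
     * measurement interval. time_enabled is how long the event was
     * requested; time_running is how long it actually had a counter. */
    #include <stdio.h>
    #include <stdint.h>

    static uint64_t scale_count(uint64_t raw, uint64_t time_enabled,
                                uint64_t time_running)
    {
        if (time_running == 0)
            return 0; /* event never got a counter: <not counted> */
        /* use double to avoid overflow in raw * time_enabled */
        return (uint64_t)((double)raw * time_enabled / time_running + 0.5);
    }

    int main(void)
    {
        /* hypothetical numbers: event ran for 25% of the interval */
        uint64_t raw = 1000, enabled = 1000000, running = 250000;
        printf("scaled: %llu (%.2f%%)\n",
               (unsigned long long)scale_count(raw, enabled, running),
               100.0 * running / enabled);  /* -> scaled: 4000 (25.00%) */
        return 0;
    }
    ```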

    For comparison, with the NMI watchdog enabled, confirming the limit of 4 arbitrary events plus any of cycles/instructions:

    • NMI watchdog enabled: 4 events not including cycles or instructions - no multiplexing, confirming that the NMI watchdog left all 4 programmable counters free

    • NMI watchdog enabled: 4 events plus cycles - multiplexing, confirming that cycles now has to use a programmable counter, implying that the NMI watchdog used that fixed counter.

    • NMI watchdog enabled: cycles+instructions + 3 other events - no multiplexing, as we expect. Further confirming that cycles becomes one of the events that competes for a programmable counter.

    This is all the same if I use perf stat --all-user or cycles:u.

    For example (with some horizontal whitespace removed for SO's narrow code blocks)

    # with NMI watchdog enabled
    $ taskset -c 0 perf stat --all-user -etask-clock,context-switches,cpu-migrations,page-faults,cycles,instructions,uops_issued.any,uops_executed.thread,idq.mite_uops,idq.dsb_uops    -r1 ./a.out
    
     Performance counter stats for './a.out':
    
                 40.74 msec task-clock            #    0.994 CPUs utilized          
                     0      context-switches      #    0.000 /sec                   
                     0      cpu-migrations        #    0.000 /sec                   
                   119      page-faults           #    2.921 K/sec                  
           165,566,262      cycles                #    4.064 GHz      (61.39%)
           160,597,987      instructions          #    0.97  insn per cycle (83.46%)
           286,675,168      uops_issued.any       #    7.036 G/sec       (85.28%)
           286,258,415      uops_executed.thread  #    7.026 G/sec       (85.28%)
            76,619,024      idq.mite_uops         #    1.881 G/sec       (85.28%)
            77,238,565      idq.dsb_uops          #    1.896 G/sec       (82.77%)
    
           0.040990242 seconds time elapsed
    
           0.040912000 seconds user
           0.000000000 seconds sys
    
    $ echo 0 | sudo tee  /proc/sys/kernel/nmi_watchdog
    0
    $ taskset -c 0 perf stat --all-user -etask-clock,context-switches,cpu-migrations,page-faults,cycles,instructions,uops_issued.any,uops_executed.thread,idq.mite_uops,idq.dsb_uops    -r1 ./a.out
    
     Performance counter stats for './a.out':
    
                 45.01 msec task-clock            #    0.992 CPUs utilized          
                     0      context-switches      #    0.000 /sec                   
                     0      cpu-migrations        #    0.000 /sec                   
                   120      page-faults           #    2.666 K/sec                  
           177,494,136      cycles                #    3.943 GHz                    
           160,265,384      instructions          #    0.90  insn per cycle         
           287,253,352      uops_issued.any       #    6.382 G/sec                  
           286,705,343      uops_executed.thread  #    6.369 G/sec                  
            78,189,827      idq.mite_uops         #    1.737 G/sec                  
            75,911,530      idq.dsb_uops          #    1.686 G/sec                  
    
           0.045389998 seconds time elapsed
    
           0.045165000 seconds user
           0.000000000 seconds sys
    

    https://perfmon-events.intel.com/broadwell_server.html says there's a 3rd fixed counter for CPU_CLK_UNHALTED.REF_TSC. So it's separate from the counter that counts INST_RETIRED.ANY (counter #0) or CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.THREAD_ANY (counter #1).

    ref_tsc counts at a fixed frequency, not core clock cycles; it might be better if the NMI watchdog could use that, since I expect it's much less widely used. The cycles event is CPU_CLK_UNHALTED.THREAD on Intel CPUs, counting core clock cycles while this logical core is active. Perf counts it by default.
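    The :u modifier corresponds to exclude_kernel in perf_event_attr, i.e. exactly the USR-only filtering the erratum describes. A minimal sketch counting user-mode cycles for a small loop via perf_event_open(2); it may fail with EACCES/EPERM under a restrictive kernel.perf_event_paranoid setting, and on an affected Broadwell the count could come back as 0 if the event lands on PMC4-7:

    ```c
    /* Sketch: count user-mode cycles (perf's cycles:u) for a loop
     * using perf_event_open(2). Linux-only. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <errno.h>
    #include <sys/ioctl.h>
    #include <sys/syscall.h>
    #include <linux/perf_event.h>

    int main(void)
    {
        struct perf_event_attr attr;
        memset(&attr, 0, sizeof(attr));
        attr.type = PERF_TYPE_HARDWARE;
        attr.size = sizeof(attr);
        attr.config = PERF_COUNT_HW_CPU_CYCLES;
        attr.disabled = 1;
        attr.exclude_kernel = 1;   /* the :u modifier: USR-only */
        attr.exclude_hv = 1;

        int fd = syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
        if (fd < 0) {
            printf("perf_event_open failed: %s\n", strerror(errno));
            return 0;
        }

        ioctl(fd, PERF_EVENT_IOC_RESET, 0);
        ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
        volatile unsigned long sink = 0;
        for (unsigned long i = 0; i < 1000000; i++)
            sink += i;          /* small user-mode workload */
        ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

        long long count = 0;
        read(fd, &count, sizeof(count));
        close(fd);
        printf("cycles:u = %lld\n", count);
        return 0;
    }
    ```

    This is also a convenient way to reproduce the bug in isolation: setting attr.exclude_kernel = 0 (the equivalent of plain cycles) should always produce a nonzero count, regardless of which counter the kernel schedules the event on.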