
How is this JMH microbenchmark skewed, if it is?


The benchmark is quite simple:

@State(Scope.Benchmark)
public class MathPowVsRawMultiplyTest
{
    private static final double VICTIM;

    static {
        VICTIM = new Random(System.currentTimeMillis()).nextDouble();
    }

    double result;

    @Benchmark
    public void mathPow()
    {
        result = Math.pow(VICTIM, 2.0);
    }

    @Benchmark
    public void rawMultiply()
    {
        result = VICTIM * VICTIM;
    }

    public static void main(final String... args)
        throws RunnerException
    {
        final Options options = new OptionsBuilder()
            .include(MathPowVsRawMultiplyTest.class.getCanonicalName())
            .forks(1)
            .warmupMode(WarmupMode.BULK)
            .warmupIterations(1)
            .measurementIterations(1)
            .build();

        new Runner(options).run();
    }
}

Of course, the environment matters a lot, so here goes:

  • this main() is run through IDEA 15.0.1;
  • Ubuntu 15.10, x86_64;
  • Oracle JDK 1.8u65;
  • machine is a Dell M3800, 16 GiB RAM, 8 CPU cores and the first paragraph from /proc/cpuinfo follows.

The cpuinfo:

processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 60
model name      : Intel(R) Core(TM) i7-4712HQ CPU @ 2.30GHz
stepping        : 3
microcode       : 0x1c
cpu MHz         : 3235.722
cache size      : 6144 KB
physical id     : 0
siblings        : 8
core id         : 0
cpu cores       : 4
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm ida arat epb pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid xsaveopt
bugs            :
bogomips        : 4589.60
clflush size    : 64
cache_alignment : 64
address sizes   : 39 bits physical, 48 bits virtual
power management:

And the result of the benchmark:

Benchmark                              Mode  Cnt           Score   Error  Units
MathPowVsRawMultiplyTest.mathPow      thrpt       2342955236.832          ops/s
MathPowVsRawMultiplyTest.rawMultiply  thrpt       2375082332.164          ops/s

What I haven't done is check which optimizations the JVM can apply to Math.pow(); but the results seem pretty close.

The problem is that I don't know JMH very well, so I wonder: is my benchmark completely flawed, or is the CPU/JIT combination really that good?


Solution

  • Your benchmark is completely flawed, because the compiler is able to constant-fold loads from static final fields. Have you read at least the first few JMH samples, notably JMHSample_10_ConstantFold and JMHSample_08_DeadCode? Here is how to do it better (not actually verified, caveat emptor):

    @State(Scope.Benchmark)
    public class MathPowVsRawMultiplyTest {
        private double v;

        @Setup
        public void setup() {
            v = new Random(System.currentTimeMillis()).nextDouble();
        }

        @Benchmark
        public double mathPow() {
            return Math.pow(v, 2.0);
        }

        @Benchmark
        public double rawMultiply() {
            return v * v;
        }
    }
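
    Returning the computed value is JMH's dead-code defence: the JIT cannot eliminate a computation whose result escapes the method. The explicit equivalent is to sink the value into a Blackhole (see JMHSample_09_Blackholes); a sketch, assuming the standard JMH API is on the classpath:

        import org.openjdk.jmh.infra.Blackhole;

        @Benchmark
        public void rawMultiply(final Blackhole bh) {
            // consume() tells JMH the value is "used", so the multiply survives
            bh.consume(v * v);
        }

    Returning the value is usually preferred for single results; Blackhole is handy when a benchmark produces several values per invocation.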
    

    You are also advised to set the time unit to nanoseconds (see @OutputTimeUnit or -tu), and to switch to average-time measurement instead of throughput (see @BenchmarkMode or -bm). Also, for nanobenchmarks like these, you need to verify the difference in the compiled code, available with -prof perfasm, for example.
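
    The two suggestions above can be expressed either as annotations on the benchmark class or through the Options builder; a sketch, assuming JMH 1.x on the classpath and not verified against a live run:

        import java.util.concurrent.TimeUnit;

        @BenchmarkMode(Mode.AverageTime)        // report average time per call...
        @OutputTimeUnit(TimeUnit.NANOSECONDS)   // ...in nanoseconds
        @State(Scope.Benchmark)
        public class MathPowVsRawMultiplyTest { /* benchmarks as above */ }

    or, equivalently, from main():

        final Options options = new OptionsBuilder()
            .include(MathPowVsRawMultiplyTest.class.getCanonicalName())
            .mode(Mode.AverageTime)             // same effect as -bm avgt
            .timeUnit(TimeUnit.NANOSECONDS)     // same effect as -tu ns
            .forks(1)
            .build();

    Average time in ns/op is far easier to eyeball for operations this small than throughput figures in the billions of ops/s.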