Tags: benchmarking, netlogo, microbenchmark

Timing discrepancy between two code segments in NetLogo


Can anyone explain why there is a performance difference between the following two segments? The second timer call consistently reports a significantly smaller number than the first. My only thought is that NetLogo could be caching the turtles in memory. Is this the expected behavior, or is it a bug?

to setup
  clear-all
  crt 100
  let repetitions 10000

  ;;Timing assigning x to self
  reset-timer
  repeat repetitions
  [
    ask turtles
    [
      let x self
    ]
  ]
  show timer

  ;;Timing assigning x to who of self
  reset-timer
  repeat repetitions
  [
    ask turtles
    [
      let x [who] of self
    ]
  ]
  show timer
end

Solution

  • This isn't because of anything in NetLogo itself, but rather because NetLogo runs on the JVM. The JVM optimizes code paths more aggressively the more often they run, as part of its just-in-time (JIT) compilation.

    By the time the second segment runs, the JVM has had time to optimize many of the code paths that the two segments have in common. Indeed, after switching the order of the segments, I got the following results:

    observer> setup
    observer: 0.203
    observer: 0.094
    observer> setup
    observer: 0.136
    observer: 0.098
    observer> setup
    observer: 0.13
    observer: 0.097
    observer> setup
    observer: 0.119
    observer: 0.095
    observer> setup
    observer: 0.13
    observer: 0.09
    

    Now the let x self code is faster (it's now the second thing that runs)! Notice also that both times decreased the more often I ran setup. This, too, is due to the JVM's JIT.
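
    A minimal sketch of that idea, assuming the goal is just to let the JIT compile the shared code paths before anything is timed (the procedure name setup-warmed is mine, not from the original code):

    to setup-warmed
      clear-all
      crt 100
      let repetitions 10000

      ;; untimed warm-up pass over both segments so the JIT
      ;; compiles the shared code paths before measurement
      repeat repetitions [ ask turtles [ let x self ] ]
      repeat repetitions [ ask turtles [ let x [who] of self ] ]

      ;; timed runs, after warm-up
      reset-timer
      repeat repetitions [ ask turtles [ let x self ] ]
      show timer

      reset-timer
      repeat repetitions [ ask turtles [ let x [who] of self ] ]
      show timer
    end

    With both segments exercised once before timing, the difference between the two timer readings should reflect the cost of the bodies themselves rather than JIT warm-up.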

    Similarly, if I turn off view updates and run your original code, I get:

    observer> setup
    observer: 0.088
    observer: 0.071
    observer> setup
    observer: 0.094
    observer: 0.072
    observer> setup
    observer: 0.065
    observer: 0.075
    observer> setup
    observer: 0.067
    observer: 0.071
    observer> setup
    observer: 0.067
    observer: 0.068
    

    The let x self code starts out slower (for the reason above) and then settles to about the same speed, as one might expect. There are many possible reasons why this only happens with view updates off; NetLogo is simply doing a lot less work when view updates are off.

    The JVM's JIT is extremely effective, but it is also complicated and hard to reason about. There is a lot to consider if you want to write truly correct micro-benchmarks.
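
    One common technique for this kind of micro-benchmark is to interleave the two segments over several rounds and compare only the later rounds, once the JIT has settled. A rough sketch, assuming the same 100 turtles and 10000 repetitions as above (the procedure name benchmark-interleaved and the output labels are mine):

    to benchmark-interleaved
      clear-all
      crt 100
      let repetitions 10000

      ;; alternate the two segments over several rounds; the later
      ;; rounds reflect warmed-up JIT behavior
      repeat 5
      [
        reset-timer
        repeat repetitions [ ask turtles [ let x self ] ]
        show (word "self: " timer)

        reset-timer
        repeat repetitions [ ask turtles [ let x [who] of self ] ]
        show (word "[who] of self: " timer)
      ]
    end

    Comparing only the last round or two of each pair reduces the chance that JIT warm-up, rather than the code under test, dominates the numbers.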