netlogo, agent-based-modeling, repast-simphony

Agent-Based Simulation: Why NetLogo is running much faster than Java-based Repast


Everyone says Java is a language for large systems and engineering projects, and that it runs much faster than most other languages. I just compared it with another agent-based modelling language, NetLogo, and found that NetLogo runs FOUR TIMES FASTER than the Java-based Repast in the classic wolf-sheep simulation model. Both models use the same simulation parameters and run for 5 seconds of real time. NetLogo can simulate more than 8000 time steps, whereas Java-based Repast only executes around 2600 time steps. Why?



Solution

  • If you are comparing the NetLogo and Repast Simphony wolf-sheep predation demos with their default settings, it is not a fair comparison, because the Repast model is doing a lot more work: file-based data logging, chart rendering, 2D display rendering and 3D display rendering. Both the Repast and NetLogo displays have update settings that determine how often they are rendered relative to the tick count, and display rendering speed is highly dependent on the GPU.

    To get a better comparison of performance, we need to create a more controlled test environment. I ran the NetLogo demo using default parameters with "view updates" unchecked, so the display does not update but the chart still does. Running the model for 20,000 ticks takes about 14 seconds. I modified the Repast demo by removing the file logging and closing the 2D and 3D displays, leaving only the chart showing, and ran the model for 20,000 ticks, which also takes about 14 seconds. So the performance is effectively identical between Repast and NetLogo for this demo.

    We should also consider that the demo models in both Repast and NetLogo with default parameters are TOY models with limited complexity. Typically, in a more complex model of the kind used in scientific studies, the agent behaviors are so complex that the individual behavior computation time is an order of magnitude greater than the toolkit framework code time, making these types of comparisons less informative about the toolkit's capability.
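The effect of per-tick side work described above can be reproduced outside either toolkit. Below is a standalone Java micro-benchmark (it is not Repast or NetLogo code; the loop body and tick count are illustrative stand-ins): the same simulation loop is timed bare and then with per-tick file logging, roughly analogous to the Repast demo's data logging. The absolute numbers will vary by machine, but the logged loop should be noticeably slower.

```java
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;
import java.util.Random;

public class TickOverheadDemo {
    static final int TICKS = 20_000;

    // A stand-in for one tick of agent behaviour: a little arithmetic.
    static double step(Random rng, double energy) {
        for (int i = 0; i < 100; i++) {
            energy += rng.nextDouble() - 0.5;
        }
        return energy;
    }

    public static void main(String[] args) throws IOException {
        Random rng = new Random(42);
        double energy = 0;

        // Bare loop: behaviour only, no I/O or rendering.
        long t0 = System.nanoTime();
        for (int tick = 0; tick < TICKS; tick++) {
            energy = step(rng, energy);
        }
        long bareNs = System.nanoTime() - t0;

        // Same loop, but each tick also writes and flushes a line of data,
        // loosely mimicking per-tick file-based logging.
        t0 = System.nanoTime();
        try (BufferedWriter log = new BufferedWriter(new FileWriter("ticks.log"))) {
            for (int tick = 0; tick < TICKS; tick++) {
                energy = step(rng, energy);
                log.write(tick + "," + energy + "\n");
                log.flush();
            }
        }
        long loggedNs = System.nanoTime() - t0;

        System.out.printf("bare loop:    %d ms%n", bareNs / 1_000_000);
        System.out.printf("with logging: %d ms%n", loggedNs / 1_000_000);
    }
}
```

Rendering overhead behaves the same way: any work the framework does once per tick, whether drawing a display or flushing a log line, caps the achievable ticks per second regardless of how fast the language itself is.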
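The last point, that framework overhead stops mattering as agent behavior gets heavier, is simple arithmetic. This sketch uses assumed, illustrative per-tick costs (the nanosecond figures are made up, not measured from either toolkit) to show how the framework's share of runtime shrinks as behavior cost grows:

```java
public class FrameworkShareDemo {
    public static void main(String[] args) {
        long frameworkNsPerTick = 1_000;   // assumed fixed scheduler/bookkeeping cost
        long toyBehaviourNs     = 500;     // toy model: behaviour cheaper than framework
        long complexBehaviourNs = 50_000;  // complex model: behaviour dominates

        System.out.printf("toy model:     framework is %.0f%% of each tick%n",
                100.0 * frameworkNsPerTick / (frameworkNsPerTick + toyBehaviourNs));
        System.out.printf("complex model: framework is %.1f%% of each tick%n",
                100.0 * frameworkNsPerTick / (frameworkNsPerTick + complexBehaviourNs));
    }
}
```

With these assumed numbers the framework accounts for about 67% of a toy-model tick but only about 2% of a complex-model tick, which is why toolkit benchmarks on toy demos say little about performance on research-scale models.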