Tags: modelica, dymola

Dymola 2018 performance on Linux (Xubuntu)


The issue I experience is that when running simulations of the same IBPSA/AixLib-based models on Linux, I get a significant performance drop: simulation time is roughly doubled in comparison to a Windows 8 machine. Below you find the individual specs of the two machines. In both cases I use the Cvode solver with identical settings. Compilation is done with VC14.0 (Windows) or GCC (Xubuntu).

Is anyone familiar with this issue, or can anyone suggest what the reason might be?

Win 8: Intel Xeon @ 2.9 GHz (6 logical processors), 32 GB RAM, 64-bit

Xubuntu 16.04 VM: Intel Xeon @ 3.7 GHz (24 logical processors), 64 GB RAM, 64-bit

Thanks!


Solution

  • In addition to the checklist in the comments, also consider enabling hardware virtualization support if not already done.

    In general gcc tends to produce slower code than Visual Studio. In order to turn on optimization one could try adding the following line:

    CFLAGS=$CFLAGS" -O2"

    at the top of insert/dsbuild.sh (a sketch of the edit in context is shown below).

    The reason for not having it turned on by default is to avoid lengthy compilations and bloated binaries. For industrial-sized models these are real issues.
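
    As a concrete illustration of where the line goes, here is a minimal sketch of insert/dsbuild.sh. Everything except the CFLAGS line is hypothetical, since the actual script shipped with Dymola differs between versions; the point is only that the flag is appended near the top, before the compiler is invoked on the generated model code.

        #!/bin/sh
        # insert/dsbuild.sh -- sketch only; the real file shipped with Dymola differs per version.

        # Append -O2 so that gcc optimizes the generated model code.
        # Higher levels such as -O3 may help further, at the cost of even
        # longer compilation times and larger binaries.
        CFLAGS=$CFLAGS" -O2"

        # ... the remainder of the original dsbuild.sh follows unchanged and
        # eventually invokes gcc with $CFLAGS on the generated model code
        # (typically dsmodel.c) ...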