TraCIScenarioManagerForker vs veins-launchd


I currently use TraCIScenarioManagerForker to spawn SUMO for each simulation, the "forker" method. However, the official Veins documentation recommends launching the SUMO daemon separately using the veins_launchd script and then running simulations, the "launchd" method.

Using the forker method makes running simulations a one-command job, since SUMO is killed when the simulation ends. With the launchd method, however, one has to take care of starting the SUMO daemon and killing it once the simulations are done.
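
For illustration, the launchd workflow looks roughly like this (a sketch based on a Veins 5.x checkout, where the script lives in bin/; I am quoting the flags from memory, so confirm them with ./bin/veins_launchd --help):

    # terminal 1: start the proxy once; -c selects the SUMO binary it should
    # launch for each connecting client, -vv makes it verbose
    cd veins
    ./bin/veins_launchd -vv -c sumo

    # terminal 2: run as many simulations against the proxy as you like
    cd veins/examples/veins
    ./run -u Cmdenv

    # when all runs are finished, stop the proxy again (Ctrl-C or kill its PID)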

What are the advantages and disadvantages of each method? I'm trying to understand the recommended best practices when using Veins.


Solution

  • Indeed, Veins 5.1 provides three (four, if you count an experimental one) ways of connecting a running OMNeT++ simulation to SUMO:

    1. assuming SUMO is already running and connecting there directly (TraCIScenarioManager)

    2. running SUMO directly from the process - on Linux: as a fork, on Windows: as a process in the same context (TraCIScenarioManagerForker)

    3. connecting to a Proxy (veins_launchd) that launches an isolated instance of SUMO for every client that connects to it (TraCIScenarioManagerLaunchd)

    4. if you are feeling adventurous, the veins_libsumo fork of Veins offers a fourth option: including the SUMO engine directly in your OMNeT++ simulation and using it via method calls (instead of remote procedure calls via a network socket). Contrast, for example, TraCI based code vs. libsumo based code. This can be orders of magnitude faster with none of the drawbacks discussed below. At the time of writing (Mar 2021) this fork is just a proof of concept, though.

    Each of these has unique benefits and drawbacks (see the configuration sketch after this list):

    1. is the most flexible: you can connect to a long-running instance of SUMO which is only rolled backwards/forwards in time through the use of snapshots, connect multiple clients to a single instance, etc., but

      • requires you to manually take care of running exactly as many instances of SUMO as you need at exactly the time when you need them
    2. is very convenient, but

      • requires the simulation (as opposed to the person running the simulation) to "know" how to launch SUMO - so a simulation that works on one machine won't work on another because SUMO might be installed in a different path there, etc.
      • launches SUMO in the directory of the simulation, so output files from multiple SUMO instances overwrite each other and all file output is stored in the directory containing the simulation (which might be on a slow or write-protected disk, etc.)
      • results in both SUMO and OMNeT++ writing console output into what is potentially the same console window, requiring experience in telling one from the other when debugging crashes (and things get even more messy if one wants to debug SUMO)
    3. does not suffer from any of these problems, but

      • requires the user to remember to start the proxy before starting the simulations
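
    To make the three standard options concrete, here is a minimal omnetpp.ini sketch. The manager submodule's NED type is fixed in the scenario network (the stock Veins example uses TraCIScenarioManagerLaunchd), so switching methods also means changing that submodule type in the .ned file. The parameter names below (host, port, commandLine, configFile, launchConfig) are taken from the Veins 5.x .ned files as I remember them; verify them against the TraCIScenarioManager*.ned files in your checkout.

      [Config Direct]
      # 1. TraCIScenarioManager: attach to a SUMO instance you started yourself,
      #    e.g. "sumo --remote-port 9999 -c erlangen.sumo.cfg"
      *.manager.host = "localhost"
      *.manager.port = 9999

      [Config Forker]
      # 2. TraCIScenarioManagerForker: the simulation forks SUMO itself;
      #    $port, $seed and $configFile are substituted by the manager
      *.manager.commandLine = "sumo --remote-port $port --seed $seed --configuration-file $configFile"
      *.manager.configFile = "erlangen.sumo.cfg"

      [Config Launchd]
      # 3. TraCIScenarioManagerLaunchd: talk to the veins_launchd proxy, which
      #    starts one isolated SUMO instance per connecting client
      *.manager.host = "localhost"
      *.manager.port = 9999
      *.manager.launchConfig = xmldoc("erlangen.launchd.xml")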