I am testing RandomWaypointMobility with a constrained area of minX=-3000m, maxX=3000m, minY=-3000m and maxY=3000m. The @display string sets bgp=6000,6000. The result is that nodes in the negative part of the coordinate system are rendered outside the display/canvas area.
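In omnetpp.ini the constrained area is configured roughly like this (the host submodule path is just an example; exact parameter names may vary with the INET version):
*.host[*].mobility.typename = "RandomWaypointMobility"
*.host[*].mobility.constraintAreaMinX = -3000m
*.host[*].mobility.constraintAreaMaxX = 3000m
*.host[*].mobility.constraintAreaMinY = -3000m
*.host[*].mobility.constraintAreaMaxY = 3000m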
Are there any parameters I can use to tell OMNeT++/INET that the origin of the coordinate system is at the center of the display/canvas? I have tried
*.visualizer.sceneVisualizer.sceneMaxX = 3000m
*.visualizer.sceneVisualizer.sceneMinX = -3000m
*.visualizer.sceneVisualizer.sceneMaxY = 3000m
*.visualizer.sceneVisualizer.sceneMinY = -3000m
*.visualizer.sceneVisualizer.sceneMaxZ = 3000m
*.visualizer.sceneVisualizer.sceneMinZ = -3000m
but it doesn't work as I hoped.
I realize that for RandomWaypointMobility I can just use a constrained area with positive coordinates only, which would keep objects within the canvas. However, my next task is to pull in mobility traces that include negative coordinates. Do I need to manually shift all coordinates so they become positive and stay within the canvas/display, or is there a smarter way of doing things?
Any hints appreciated!
Thanks, Dragos
What you set is in fact bgb=6000,6000, which sets the size of the module. There were indeed plans to add a tag called bgp directly to OMNeT++ that would introduce an offset, but in the end it was not implemented. The reason is that once you go down that rabbit hole, you also want to implement scaling, then rotation, and so on. So the default display-string-based visualization was kept as simple as possible, and all of the transformation machinery was left to the model code.
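For reference, the background size comes from the network's @display string in NED, along these lines (the network name here is only an illustration):
network MyNetwork
{
    parameters:
        @display("bgb=6000,6000");
}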
Indeed, SceneCanvasVisualizer in INET has viewScale and viewTranslation parameters that can be used for this purpose.
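A minimal sketch of how that could look in the ini file, assuming the same visualizer path as in your question and that viewTranslation takes an "x y" offset string (check SceneCanvasVisualizer.ned in your INET version for the exact value format and units):
*.visualizer.sceneVisualizer.viewScale = 1
*.visualizer.sceneVisualizer.viewTranslation = "3000 3000"
Shifting the view by half of the 6000x6000 background in each direction should place the coordinate origin at the center of the canvas.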