Let's assume that I have a starship with coordinates defined in code like
public float coords[] = {
        // one triangle: three vertices, (x, y, z) each
        -0.025f, -0.04f, -0.1f,
         0.025f, -0.04f, -0.1f,
        -0.025f,  0.04f, -0.1f,
};
And I need to check collisions with a meteor.
So the starship has a triangular bounding box and the meteor has a square one.
So whenever I translate or rotate my starship I use the MVP matrix and multiply each vertex by it in the vertex shader. BUT my bounding boxes should also move and rotate.
How can I do that? Should I multiply by the MVP matrix outside of the vertex shader for both the starship coordinates and the bounding box coordinates, and remove the MVP matrix from the vertex shader? Or is there another way to move and rotate the bounding box together with the real object?
We need to separate two things: collision detection and rendering.
When you render models you usually have a vertex buffer (data stored in object space), and in the shader you transform it by the ModelView matrix. It would be quite expensive to modify the data inside the vertex buffer just to change the model's position.
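On Android, for example, that means computing the MVP on the CPU once per frame and handing it to the shader as a uniform. A minimal sketch, assuming the GLES20/Matrix APIs and hypothetical mvpHandle/modelMatrix fields set up elsewhere:

import android.opengl.GLES20;
import android.opengl.Matrix;

class ShipRenderer {
    int mvpHandle;                                // uniform location from glGetUniformLocation
    float[] viewProjectionMatrix = new float[16]; // camera, set up elsewhere
    float[] modelMatrix = new float[16];          // the ship's translation/rotation

    void draw() {
        float[] mvp = new float[16];
        // mvp = viewProjection * model, recomputed whenever the ship moves
        Matrix.multiplyMM(mvp, 0, viewProjectionMatrix, 0, modelMatrix, 0);
        // only the uniform changes; the vertex buffer stays in object space
        GLES20.glUniformMatrix4fv(mvpHandle, 1, false, mvp, 0);
        GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, 3);
    }
}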
On the other hand, collision checking happens not in shaders but in a physics engine, for instance on the CPU side. Usually you transform your bounding box by the model (world) matrix and then perform the collision test in world space. Note that transforming the bounding box is cheap: a few vertices for a cube or quad versus several hundred for the object itself.
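Concretely, each corner can be promoted to a homogeneous point and multiplied by the model matrix on the CPU. A sketch using Android's Matrix class; localCorners and modelMatrix are assumed to be your own fields:

import android.opengl.Matrix;

class BoundingShape {
    float[] localCorners;                // object-space corners, packed x,y,z
    float[] modelMatrix = new float[16]; // the same matrix the renderer uses
    float[] worldCorners;                // refreshed every frame, packed x,y,z

    void updateWorldCorners() {
        int n = localCorners.length / 3;
        if (worldCorners == null) worldCorners = new float[n * 3];
        float[] in = new float[4], out = new float[4];
        for (int i = 0; i < n; i++) {
            // promote (x, y, z) to a homogeneous point (w = 1)
            in[0] = localCorners[3 * i];
            in[1] = localCorners[3 * i + 1];
            in[2] = localCorners[3 * i + 2];
            in[3] = 1f;
            Matrix.multiplyMV(out, 0, modelMatrix, 0, in, 0);
            worldCorners[3 * i]     = out[0];
            worldCorners[3 * i + 1] = out[1];
            worldCorners[3 * i + 2] = out[2];
        }
    }
}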
It is important (as you noticed) to keep the two in sync: the same model matrix should drive both the rendered mesh and the bounding box.
Some pseudocode:

update() {
    foreach o in objects
        o.calculateWorldBoundingBox()    // a few matrix-vector multiplies each
    testCollisionsBetweenObjects()
}

render() {
    foreach o in objects
        o.render()                       // the shader applies the MVP per vertex
}
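Fleshed out slightly in Java (GameObject, box, and handleCollision are illustrative names, not from any particular engine), the update half might look like:

import java.util.List;

class AABB {
    float minX, minY, maxX, maxY;   // 2D extents are enough for this example

    static boolean overlap(AABB a, AABB b) {
        return a.minX <= b.maxX && b.minX <= a.maxX
            && a.minY <= b.maxY && b.minY <= a.maxY;
    }
}

class GameObject {
    AABB box;   // world-space box, refreshed in calculateWorldBoundingBox()
    void calculateWorldBoundingBox() { /* multiplyMV over the corners */ }
}

class World {
    List<GameObject> objects;

    void update() {
        for (GameObject o : objects)
            o.calculateWorldBoundingBox();
        // naive pairwise test; a real engine would add broad-phase culling
        for (int i = 0; i < objects.size(); i++)
            for (int j = i + 1; j < objects.size(); j++)
                if (AABB.overlap(objects.get(i).box, objects.get(j).box))
                    handleCollision(objects.get(i), objects.get(j));
    }

    void handleCollision(GameObject a, GameObject b) { /* game logic */ }
}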