
OpenGL: Suggestions on handling collision detection and vertex data?


I was thinking about how to do collision detection in my OpenGL application and came to the conclusion that I should keep two copies of my geometry data.

1) First copy: used only for OpenGL rendering. This can be stored as a VBO plus vertex indices (stored on the GPU?).

2) Second copy: kept on the client side (in a class, say 3DEntity), on which I perform all tests, including bounding-box collision tests and ray casting.

So, after I load the mesh data (say from an OBJ file), I first prepare the "first copy" and then, using the vertex indices, I prepare the "second copy". (For example, if my mesh is a simple cube, my "first copy" will have 8 vertices and use vertex indices to render it properly, but my "second copy" will have 36 vertices in total, because I have to do ray casting on triangles.)
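To make that concrete, here is a rough C++ sketch of how the "second copy" could be expanded from the indexed data; the Vertex and Triangle types and the function name are placeholders for illustration, not actual code from my project:

    #include <cstdint>
    #include <vector>

    // Hypothetical vertex type; a real one would also carry normals, UVs, etc.
    struct Vertex {
        float x, y, z;
    };

    // One triangle of the "second copy", used for ray casting.
    struct Triangle {
        Vertex a, b, c;
    };

    // Expand the indexed "first copy" (unique vertices + indices, as uploaded
    // in the VBO/IBO) into a flat triangle list for collision tests.
    std::vector<Triangle> buildPhysicsTriangles(const std::vector<Vertex>& vertices,
                                                const std::vector<std::uint32_t>& indices)
    {
        std::vector<Triangle> triangles;
        triangles.reserve(indices.size() / 3);
        for (std::size_t i = 0; i + 2 < indices.size(); i += 3) {
            triangles.push_back({vertices[indices[i]],
                                 vertices[indices[i + 1]],
                                 vertices[indices[i + 2]]});
        }
        return triangles;
    }

For the cube example, the 36 indices expand into 12 triangles, i.e. 36 stored vertices.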

I keep a transformation matrix inside the 3DEntity to hold state such as position, rotation, etc. of my 3D entity. So, in pseudocode:

class 3DEntity {
    Vertex[] verticesForPhysics;   // model-space copy used for collision tests
    Matrix transformationMatrix;   // model-to-world transform (position, rotation, ...)
}

I keep the "verticesForPhysics" values fixed (meaning they are always in the model coordinate system). So when I want to move or rotate my entity, I simply change the "transformationMatrix".

When doing tests like collision detection, I make another temporary copy of the vertices by multiplying "verticesForPhysics" by "transformationMatrix", giving the vertices in world coordinates.

Vertex[] verticesForPhysicsInWorld = transformationMatrix * verticesForPhysics;

Now I do my tests using these "verticesForPhysicsInWorld".
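As a rough sketch, assuming a GLM-style math library (the actual math types are not specified above, so this is just an assumption), that transform step could look like this:

    #include <vector>
    #include <glm/glm.hpp>

    // Transform the fixed model-space physics vertices into world space.
    // The w = 1.0 component makes the matrix's translation part apply.
    std::vector<glm::vec3> toWorldSpace(const std::vector<glm::vec3>& verticesForPhysics,
                                        const glm::mat4& transformationMatrix)
    {
        std::vector<glm::vec3> verticesForPhysicsInWorld;
        verticesForPhysicsInWorld.reserve(verticesForPhysics.size());
        for (const glm::vec3& v : verticesForPhysics) {
            verticesForPhysicsInWorld.push_back(
                glm::vec3(transformationMatrix * glm::vec4(v, 1.0f)));
        }
        return verticesForPhysicsInWorld;
    }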

Is this the correct way of handling this? I am worried about the data redundancy of keeping two copies and making yet another temporary copy for collision detection tests. How do other OpenGL game engines handle this?


Solution

  • It is pretty normal to have multiple copies of vertex data, so don't worry about it :-)

    For example you might have:

    • Untransformed vertex data that describes your geometry
    • Transformed vertex data during rendering (possibly done by the GPU)
    • Simplified, transformed data used for collision detection (e.g. AABB trees) - this is much more efficient than working with the full transformed vertex data!
    • When needed, transform the vertex data for more detailed collision detection, but only after you have determined that two objects might be colliding because their AABBs overlap (a minimal sketch of such a broad-phase check is shown below). You don't want to do this for every object every frame!

    Note that it is also common for the vertex data / geometry used for physics to differ from that used for rendering. Often you can get away with simpler shapes for collision detection (e.g. spheres, cylinders).
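
    As a minimal illustration of the broad-phase idea mentioned above (the Aabb struct, the function names, and the use of GLM are assumptions for the sketch, not taken from any particular engine):

    #include <limits>
    #include <glm/glm.hpp>

    // Illustrative axis-aligned bounding box in world space.
    struct Aabb {
        glm::vec3 min;
        glm::vec3 max;
    };

    // Broad phase: cheap overlap test between two world-space AABBs.
    // Only if this returns true is it worth transforming the full vertex
    // data (or triangle list) for a detailed narrow-phase test.
    bool overlaps(const Aabb& a, const Aabb& b)
    {
        return a.min.x <= b.max.x && a.max.x >= b.min.x &&
               a.min.y <= b.max.y && a.max.y >= b.min.y &&
               a.min.z <= b.max.z && a.max.z >= b.min.z;
    }

    // Recompute an entity's world-space AABB from its model-space box and
    // current transform by transforming the 8 corners and taking min/max.
    Aabb worldAabb(const Aabb& modelBox, const glm::mat4& transform)
    {
        Aabb result{glm::vec3(std::numeric_limits<float>::max()),
                    glm::vec3(std::numeric_limits<float>::lowest())};
        for (int i = 0; i < 8; ++i) {
            glm::vec3 corner((i & 1) ? modelBox.max.x : modelBox.min.x,
                             (i & 2) ? modelBox.max.y : modelBox.min.y,
                             (i & 4) ? modelBox.max.z : modelBox.min.z);
            glm::vec3 world = glm::vec3(transform * glm::vec4(corner, 1.0f));
            result.min = glm::min(result.min, world);
            result.max = glm::max(result.max, world);
        }
        return result;
    }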