For my personal project I need to:

1. Produce a perspective projection from a 3D object to a 2D plane (seen from a point defined in space, which will be the camera).
2. Get the exact area/point coordinates of the perspective projection.
3. Produce the perspective projection of multiple 3D objects, where some objects may be behind others.
4. Render the scene (only one image; there won't be any animation, so no need for real-time rendering).
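For context, point 1 boils down to a pinhole-camera projection. Here is a minimal sketch of the idea, assuming a camera at the origin looking down the +z axis; the `focal` parameter and screen dimensions are illustrative values, not from any library:

```python
def project(point, focal=500.0, width=640, height=480):
    """Project a 3D point (x, y, z) onto 2D screen coordinates.

    Assumes a pinhole camera at the origin looking down +z.
    """
    x, y, z = point
    if z <= 0:
        raise ValueError("point is behind the camera")
    # Similar triangles: screen offset = focal * (world offset / depth)
    sx = width / 2 + focal * x / z
    sy = height / 2 - focal * y / z  # flip y: screen y grows downward
    return sx, sy

# A vertex 10 units away, 1 unit right of and above the optical axis:
print(project((1.0, 1.0, 10.0)))  # -> (370.0, 190.0)
```

Moving the camera off the origin is then just a matter of translating (and rotating) every point into camera space before calling `project`.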
For points 1, 2 and 4 I think I've found a way to do this using PyGame, as described here: http://codentronix.com/2011/04/21/rotating-3d-wireframe-cube-with-python/
But for point 3 I'm kind of stuck: even if I can get the perspective projection of each of my objects, how can I know which objects are really visible and which (whole objects or parts of them) are not?
I really need to know exactly which parts of the objects are visible and which are not; my ultimate goal is a matrix of my screen image with the area of every object's projection clearly defined.
For example, if the matrix contains all the pixels of my screen image and we have 10 objects, we would get 0 where there is no object, 1 where object #1 is visible, 2 where object #2 is visible, etc.
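One standard way to build exactly this kind of matrix is a z-buffer that stores object IDs instead of colours: rasterize each object's projection with its depth per pixel, and keep the ID of the nearest object. The sketch below is a deliberately simplified toy (each "object" is a screen-aligned rectangle at a single depth), just to show the principle; the names and the rectangle representation are my own assumptions:

```python
def id_buffer(objects, width=8, height=8):
    """Per-pixel object-ID matrix via a toy z-buffer.

    objects: list of (obj_id, x0, y0, x1, y1, depth) rectangles,
             with x1/y1 exclusive and smaller depth = closer.
    Returns a height x width list of lists: 0 = background,
    otherwise the ID of the nearest object covering that pixel.
    """
    INF = float("inf")
    depth = [[INF] * width for _ in range(height)]
    ids = [[0] * width for _ in range(height)]
    for obj_id, x0, y0, x1, y1, z in objects:
        for y in range(max(0, y0), min(height, y1)):
            for x in range(max(0, x0), min(width, x1)):
                if z < depth[y][x]:  # closer than whatever is there
                    depth[y][x] = z
                    ids[y][x] = obj_id
    return ids

# Object 2 is nearer (depth 5) and partially occludes object 1 (depth 10):
m = id_buffer([(1, 0, 0, 4, 4, 10.0), (2, 2, 2, 6, 6, 5.0)])
```

In a real renderer you would rasterize the projected triangles of each object instead of rectangles, interpolating depth across each triangle, but the per-pixel "keep the closest ID" logic stays the same.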
I must add that I'm an experienced developer in Python and many other languages, but I've never done any game-related or rendering development before.
Can someone help me get on the right tracks?
PS: side question: if you could also point me to a perspective projection and rendering implementation better optimized than PyGame, I'd be very interested!
I suggest using OpenGL for such tasks. Learning basic OpenGL (and this looks very basic) is not that much of an effort. Take a look here (a well-known tutorial) and here (a basic example using Python, Qt and OpenGL). With some effort, this might take you a day to get started.