So I have this object modeled in my 3D program, and I took photos of the real object from all sides, plus a top view and a bottom view, against a green screen so the object can be segmented out with chroma keying.
How would you apply these orthographic pictures to the object without human intervention? That is, I don't want to construct the texture map myself in Photoshop; I want to feed the pictures to my program as input and let it work out the mapping for me.
Is it even possible? I know there would be seams, and gaps or overlaps where the views meet, but that is a secondary problem I will have to solve later on, if I manage the first part :P
I thought about cube mapping, but then my object would have to be reflective and specular, sort of metal-like, and the reflection would change as the object rotates.
I also read about dividing the mesh into tons of tiny triangles and then coloring each one with the projected color from the corresponding picture, but how do you find out which pixel in the photo corresponds to a given triangle?
I believe I can answer part of this. After you divide your mesh into tiny triangles, you want to calculate the projection of each tiny triangle onto the rectangle that is the view matching your photograph. So you need:
- the camera position, orientation and projection parameters (field of view, aspect ratio) that reproduce the view in the photograph, and
- the world-space positions of the triangle's vertices to push through that projection.
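To make that concrete, here is a minimal sketch using the fixed-function pipeline: gluProject pushes an object-space vertex through the current modelview, projection and viewport to get window coordinates, which you can then flip into image coordinates. It assumes you have already set the GL camera up so the rendered view lines up with the photograph; vertexToPixel is just an illustrative name.

```cpp
// Minimal sketch: map one mesh vertex to a pixel in the matching photograph.
// Assumes the GL modelview/projection/viewport are already set up so that the
// rendered view lines up exactly with the photo.
#include <GL/glu.h>

bool vertexToPixel(double x, double y, double z, int& px, int& py)
{
    GLdouble model[16], proj[16];
    GLint view[4];
    glGetDoublev(GL_MODELVIEW_MATRIX, model);
    glGetDoublev(GL_PROJECTION_MATRIX, proj);
    glGetIntegerv(GL_VIEWPORT, view);

    GLdouble wx, wy, wz;
    if (gluProject(x, y, z, model, proj, view, &wx, &wy, &wz) != GL_TRUE)
        return false;

    px = static_cast<int>(wx);
    // gluProject's origin is the bottom-left corner; image files usually start
    // at the top-left, so flip the row.
    py = view[3] - 1 - static_cast<int>(wy);
    return true;
}
```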
As an alternative to doing that calculation yourself in software, perhaps you could use a special colour mapping where each triangle gets its own particular colour: do the projection in OpenGL with the right options (no blending), then get a bitmap out of OpenGL. The colours in that bitmap map back to triangles, so you can relate locations in the picture to the individual triangles.
I realize this is only a partial answer but hopefully it is useful!
-- EDIT -- Well, OpenGL can effectively take a screenshot of itself by reading the framebuffer back (glReadPixels, or whatever screenshot helper your framework provides). If you set it up with no lighting and just use glColor3f to give each mini triangle its own colour, the screenshot will be the projection of the 3D object onto the camera. Use a separate, distinct colour for every triangle in your mesh, then look for that colour in the screenshot.
So:
1. Lighting off (it adjusts colours).
2. For each triangle in the mesh: set it to an arbitrary but unique colour.
3. Capture a screenshot.
4. For each triangle in the mesh:
   - find the x,y coordinates of its colour in the screenshot,
   - use those x,y coordinates to look up the colour at the same position in the photograph,
   - set the triangle's colour to the colour you found.
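A rough sketch of those four steps, using the fixed-function pipeline and glReadPixels as the "screenshot". The Triangle and Photo structs and bakeColoursFromPhoto are made-up names for illustration; it also assumes the GL camera and viewport already match the photograph, and that the photo's rows are stored bottom-up, the same way glReadPixels returns them.

```cpp
// Rough sketch of steps 1-4, assuming a fixed-function GL context whose camera
// and viewport already match the photograph. Triangle, Photo and
// bakeColoursFromPhoto are illustrative names, not a real API.
#include <GL/gl.h>
#include <vector>

struct Vec3     { float x, y, z; };
struct Triangle { Vec3 v[3]; Vec3 colour; };                   // colour gets filled in below
struct Photo    { int w, h; std::vector<unsigned char> rgb; }; // RGB rows stored bottom-up,
                                                               // same size as the viewport

void bakeColoursFromPhoto(std::vector<Triangle>& mesh, const Photo& photo)
{
    const int w = photo.w, h = photo.h;

    // 1. Lighting (and anything else that would alter the flat colours) off.
    glDisable(GL_LIGHTING);
    glDisable(GL_BLEND);
    glDisable(GL_TEXTURE_2D);
    glDisable(GL_DITHER);

    // 2. Draw each triangle in its own ID colour: pack (index + 1) into RGB so
    //    pure black stays reserved for the background.
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glBegin(GL_TRIANGLES);
    for (unsigned int i = 0; i < mesh.size(); ++i) {
        unsigned int id = i + 1;
        glColor3ub((id >> 16) & 0xFF, (id >> 8) & 0xFF, id & 0xFF);
        for (int k = 0; k < 3; ++k)
            glVertex3f(mesh[i].v[k].x, mesh[i].v[k].y, mesh[i].v[k].z);
    }
    glEnd();

    // 3. Capture the "screenshot" (the ID image) from the framebuffer.
    std::vector<unsigned char> ids(static_cast<size_t>(w) * h * 3);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glReadPixels(0, 0, w, h, GL_RGB, GL_UNSIGNED_BYTE, ids.data());

    // 4. Decode which triangle each pixel belongs to and copy the photograph's
    //    colour at the same position onto that triangle.
    for (size_t p = 0; p < ids.size(); p += 3) {
        unsigned int id = (static_cast<unsigned int>(ids[p]) << 16) |
                          (static_cast<unsigned int>(ids[p + 1]) << 8) |
                           static_cast<unsigned int>(ids[p + 2]);
        if (id == 0 || id > mesh.size()) continue;             // background / stray pixel
        mesh[id - 1].colour = { photo.rgb[p]     / 255.0f,     // naive: the last pixel found
                                photo.rgb[p + 1] / 255.0f,     // wins; averaging a triangle's
                                photo.rgb[p + 2] / 255.0f };   // pixels would look better
    }
}
```

Note that "last pixel wins" is the crudest possible assignment; averaging all of a triangle's pixels, or the patch-as-texture idea below, would look better.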
I don't know if this will work well enough - obviously it will depend on how finely the mesh is divided. If the triangles are too small, some of the colours will not show up in the screenshot at all. If they are too big, you won't be finding an individual pixel colour but a whole section of the photograph, so you would then have to use that triangle's normal vector to convert the patch into a texture to apply to it.
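If you do go down that road, the per-triangle normal is just the normalised cross product of two edge vectors, something like this (triangleNormal is an illustrative name):

```cpp
// The triangle normal mentioned above: the normalised cross product of two
// edge vectors.
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 triangleNormal(const Vec3& a, const Vec3& b, const Vec3& c)
{
    const Vec3 e1 = { b.x - a.x, b.y - a.y, b.z - a.z };
    const Vec3 e2 = { c.x - a.x, c.y - a.y, c.z - a.z };
    Vec3 n = { e1.y * e2.z - e1.z * e2.y,
               e1.z * e2.x - e1.x * e2.z,
               e1.x * e2.y - e1.y * e2.x };
    const float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }
    return n;
}
```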