Tags: python, opengl, 3d, textures, wavefront

In an obj+mtl+jpg 3D model, how do I get the 3D coordinate where a specific pixel of the jpg texture is applied?


I have 3 files, a .obj, a .mtl, and a .jpg file.

Is there a way to get, for each (relevant) pixel in the .jpg file, the 3D point where its color is "projected"?

I mean, this must be calculated somewhere in the process of applying the texture to the model, right?

The idea is to have an array with the same height and width as the jpg file, but with a 3D coordinate in each cell.

I'm working in Python. I already imported the obj file with its texture using PyWavefront (+pyglet.gl), following the example they provide here: https://github.com/greenmoss/PyWavefront/blob/master/pywavefront/texture.py

That might not be relevant to this problem, since I want to compute the array without displaying anything.


Solution

  • That's not really how the texture is applied. What an obj gives you is a UV coordinate for each of your vertices. UV coordinates are 2D vectors that tell you where in the picture a given vertex is located. Together, these coordinates form what is called a UV map.

    This is a visual representation of it (image not included here).

    This information is all in the .obj file. Each line starting with vt describes one UV coordinate, and each line starting with f describes a polygon. Within an f line, each index group like 3/4/5 means vertexIndex/textureCoordIndex/normalIndex, i.e. 1-based indices into the v, vt and vn lists.
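    To make the vt/f layout concrete, here is a minimal hand parser for just those two record types (PyWavefront already does this for you; the function and variable names below are my own, not part of any library):

```python
# Minimal sketch of reading "vt" and "f" records from .obj text.
def parse_obj(lines):
    uvs = []    # (u, v) texture coordinates, one per "vt" line
    faces = []  # per face: a list of (vertex, uv, normal) index triples
    for line in lines:
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "vt":
            uvs.append((float(parts[1]), float(parts[2])))
        elif parts[0] == "f":
            corners = []
            for corner in parts[1:]:
                # a corner like "3/4/5" is vertex/uv/normal, 1-based;
                # missing entries (as in "3//5" or just "3") become None
                idx = corner.split("/")
                idx += [""] * (3 - len(idx))
                v, vt, vn = (int(i) if i else None for i in idx)
                corners.append((v, vt, vn))
            faces.append(corners)
    return uvs, faces
```

    Remember the indices are 1-based, so subtract 1 before indexing Python lists.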

    In the fragment shader, the texture is then sampled using the UV coordinate interpolated across the triangle for each fragment.
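    To build the array the question asks for, you can invert this mapping yourself: rasterize every triangle in UV (texel) space and interpolate the 3D vertex positions with barycentric coordinates. A minimal NumPy sketch under my own assumptions (0-based indices, faces given as (vertexIndex, uvIndex) pairs, v=0 at the bottom of the image, uncovered texels left as NaN):

```python
import numpy as np

def uv_to_3d_map(vertices, uvs, faces, height, width):
    """Return an (height, width, 3) array mapping each texel to a 3D point.

    vertices : sequence of (x, y, z) positions
    uvs      : sequence of (u, v) coordinates in [0, 1]
    faces    : triangles as [(vi, ti), (vi, ti), (vi, ti)], 0-based
    """
    out = np.full((height, width, 3), np.nan)
    for (va, ta), (vb, tb), (vc, tc) in faces:
        # triangle corners in texel space; v points up, rows point down
        p = np.array([[uvs[t][0] * (width - 1), (1 - uvs[t][1]) * (height - 1)]
                      for t in (ta, tb, tc)])
        xmin, ymin = np.floor(p.min(axis=0)).astype(int)
        xmax, ymax = np.ceil(p.max(axis=0)).astype(int)
        for y in range(max(ymin, 0), min(ymax, height - 1) + 1):
            for x in range(max(xmin, 0), min(xmax, width - 1) + 1):
                # barycentric weights of texel (x, y) in the triangle
                d = ((p[1, 1] - p[2, 1]) * (p[0, 0] - p[2, 0])
                     + (p[2, 0] - p[1, 0]) * (p[0, 1] - p[2, 1]))
                if abs(d) < 1e-12:      # degenerate triangle
                    continue
                w0 = ((p[1, 1] - p[2, 1]) * (x - p[2, 0])
                      + (p[2, 0] - p[1, 0]) * (y - p[2, 1])) / d
                w1 = ((p[2, 1] - p[0, 1]) * (x - p[2, 0])
                      + (p[0, 0] - p[2, 0]) * (y - p[2, 1])) / d
                w2 = 1 - w0 - w1
                if w0 >= 0 and w1 >= 0 and w2 >= 0:  # texel inside triangle
                    out[y, x] = (w0 * np.asarray(vertices[va])
                                 + w1 * np.asarray(vertices[vb])
                                 + w2 * np.asarray(vertices[vc]))
    return out
```

    This is a plain per-texel loop for clarity; for large textures you would vectorize the inner loops, or render the interpolated positions into an off-screen buffer on the GPU instead.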