I am writing a very simple ray-tracer in Python using CGKit and RTree. After intersecting the ray with a triangle, I would like to infer the U,V of the intersection point from the U,V's of the triangle's vertices. What is the appropriate way to do this?
Currently I am using a weighted average, where each vertex's UV is weighted by the distance from the hit point to the opposite edge, as shown below. Apart from the CGKit-related stuff, I have 3 vertices v1, v2, v3 and 3 UV's vt1, vt2, vt3. hit_p is the xyz point on the triangle returned by the CGKit intersection.
    def extract_hit_vt(tri_geom, tri_vt, hit_p, oa_face):
        hit_face = tri_geom.faces[oa_face]
        # UVs of the face's three vertices
        vt1 = np.array(vec3(tri_vt[oa_face * 3]))
        vt2 = np.array(vec3(tri_vt[oa_face * 3 + 1]))
        vt3 = np.array(vec3(tri_vt[oa_face * 3 + 2]))
        # positions of the face's three vertices
        v1 = tri_geom.verts[hit_face[0]]
        v2 = tri_geom.verts[hit_face[1]]
        v3 = tri_geom.verts[hit_face[2]]
        # distance from the hit point to the edge opposite each vertex
        d1 = ptlined(v2, v3, hit_p)
        d2 = ptlined(v3, v1, hit_p)
        d3 = ptlined(v1, v2, hit_p)
        # distance-weighted average of the vertex UVs
        hit_vt = (d1 * vt1 + d2 * vt2 + d3 * vt3) / (d1 + d2 + d3)
        return hit_vt
The main thing to decide is how exactly you want to interpolate UV inside a triangle. Since you only have three points, you are unlikely to do better than simple linear interpolation.
In this case you need the barycentric coordinates of the point inside the triangle: http://en.wikipedia.org/wiki/Barycentric_coordinate_system
In short, every point inside a triangle can be represented as a weighted sum of its vertices, where each weight is between 0 and 1 and the weights sum to 1. The weights can be found by solving a 2x2 system of linear equations.

Once you have these weights, apply the same weighted sum to the UV coordinates of the vertices.
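As a sketch (assuming the triangle vertices and UVs are available as NumPy arrays; the function name and argument layout are mine, not CGKit's), the 2x2 system and the weighted sum might look like this:

```python
import numpy as np

def interp_uv(p, v1, v2, v3, vt1, vt2, vt3):
    """Interpolate UV at point p inside triangle (v1, v2, v3).

    Solves the 2x2 linear system for the barycentric weights of p,
    then applies the same weights to the vertex UVs.
    v* are 3D positions, vt* are 2D UVs, all NumPy arrays.
    """
    e1 = v2 - v1            # edge v1 -> v2
    e2 = v3 - v1            # edge v1 -> v3
    d = p - v1              # hit point relative to v1
    # Normal equations: express d in the (e1, e2) edge basis.
    a = np.array([[e1 @ e1, e1 @ e2],
                  [e1 @ e2, e2 @ e2]])
    b = np.array([e1 @ d, e2 @ d])
    w2, w3 = np.linalg.solve(a, b)   # weights of v2 and v3
    w1 = 1.0 - w2 - w3               # weights sum to 1
    return w1 * vt1 + w2 * vt2 + w3 * vt3
```

For example, for the right triangle (0,0,0), (1,0,0), (0,1,0) with UVs (0,0), (1,0), (0,1), the point (0.25, 0.25, 0) has weights (0.5, 0.25, 0.25) and interpolated UV (0.25, 0.25).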