Tags: unity-game-engine, virtual-reality, google-vr

Google VR Lens Correction Uses Depth?


I have been looking at the lens correction shader code from the Google GVR SDK for Unity and have been scratching my head over the use of the z component of the view space position (UNITY_MATRIX_MV, without the perspective transform of UNITY_MATRIX_MVP) in the undistort() functions (this is one of the simpler variants):

float r2 = clamp(dot(pos.xy, pos.xy) / (pos.z*pos.z), 0, _MaxRadSq);
pos.xy *= 1 + (_Undistortion.x + _Undistortion.y*r2)*r2;

Given my understanding that we want to warp the rendered image in 2D screenspace to counteract the distortion that will be applied by the lens the screen is viewed through, what on earth are we doing dividing our radius(?) by the linear depth (pos.z) squared? I can conceive that this is in lieu of dividing by w for perspective, but then why would we want to divide by the square of the z component (how would that ever be more correct than simply dividing by z or w)?
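For reference, the scaling term itself reads like the standard two-coefficient radial distortion polynomial; k1 and k2 below are my own labels, assuming _Undistortion.x and _Undistortion.y play those roles:

// Assumed mapping of the material properties onto the usual radial model.
float k1 = _Undistortion.x;           // first radial distortion coefficient (assumed)
float k2 = _Undistortion.y;           // second radial distortion coefficient (assumed)
float scale = 1 + k1*r2 + k2*r2*r2;   // expands to 1 + (k1 + k2*r2)*r2
pos.xy *= scale;

So the polynomial part is clear enough; it is only the r2 computation that puzzles me.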


Solution

  • Felt a bit silly in hindsight, as this is just the result of a fairly routine optimisation.

    The division is just regular perspective division (while leaving the z coordinate used for the depth buffer/culling linear; presumably w is 1.0 so that depth interpolation stays correct). Reorganising the computation was presumably found to save shader cycles and/or improve accuracy.

    This code is equivalent to foreshortening pos.xy by dividing it by pos.z first, then taking the dot product of the result with itself to get its squared length in 2D screenspace (and then clamping it, etc.), as the sketch below shows.
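Written out in the unoptimised form, the equivalence looks something like this (an illustrative sketch only, not the code that actually ships in the SDK):

// Sketch: foreshorten first, measure the radius in 2D, then undo the divide.
float2 screenPos = pos.xy / pos.z;                          // perspective divide
float r2 = clamp(dot(screenPos, screenPos), 0, _MaxRadSq);  // squared 2D radius
screenPos *= 1 + (_Undistortion.x + _Undistortion.y*r2)*r2; // radial distortion
pos.xy = screenPos * pos.z;                                 // back to view space

Since dot(pos.xy / pos.z, pos.xy / pos.z) equals dot(pos.xy, pos.xy) / (pos.z*pos.z), folding the divide into the r2 computation gives the same result while avoiding the extra divide and multiply on pos.xy and keeping pos in view space throughout, which is presumably the cycle/accuracy win mentioned above.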