
Physically Based Shading, IBL, Half Vector, and NDotR vs NDotV


I'm trying to figure out some simple concepts about image-based lighting for PBR. In many documents and code samples, I've seen the light direction (FragToLightDir) set to the reflection vector (reflect(EyeToFragDir, Normal)). They then set the half vector to the midway point between the light and view directions: HalfVecDir = normalize(FragToLightDir + FragToEyeDir); But doesn't this just make the half vector identical to the surface normal? If so, terms like NDotH would always be 1.0. Is that correct?
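
For concreteness, here is a minimal sketch of what I mean (eye_pos, frag_pos, and Normal are assumed to be defined in the usual way):

vec3 FragToEyeDir   = normalize(eye_pos - frag_pos);
vec3 EyeToFragDir   = -FragToEyeDir;
vec3 FragToLightDir = reflect(EyeToFragDir, Normal);  // L is the reflection vector
vec3 HalfVecDir     = normalize(FragToLightDir + FragToEyeDir);
// R and V make the same angle with N, so R + V points along N.
// HalfVecDir is therefore equal to Normal, and NDotH is always 1.0.
float NDotH = dot(Normal, HalfVecDir);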

Here is another source of confusion for me. I'm trying to implement specular cube maps from the app Lys, using their algorithm for converting roughness into the mip level to sample (here: https://docs.knaldtech.com/doku.php?id=specular_lys#pre-convolved_cube_maps_vs_path_tracers, in the section Pre-convolved Cube Maps vs Path Tracers). In this document, they ask us to use NDotR as a scalar. But what is NDotR with respect to IBL? If it means dot(Normal, ReflectDir), isn't that exactly equivalent to dot(Normal, FragToEyeDir)? Using either of these dot products, the final result is too glossy at grazing angles (compared to their simpler BurleyToMipSimple() conversion), which makes me think I'm misunderstanding something about this process. I've tested the algorithm using NDotH instead, and it looks correct, but isn't that simply canceling out the rest of the math, since NDotH == 1.0? Here is my very simple function to extract the mip level using their suggested logic:

float computeSpecularCubeMipTest(float perc_ruf)
{
    // NDotR ( = dot(Normal, ReflectDir) ), EPSILON, and MipScaler are
    // defined elsewhere in the shader.
    // Perceptual roughness -> specular power: s = 2 / a^4 - 2.
    float specular_power = ( 2.0 / max( EPSILON, perc_ruf*perc_ruf*perc_ruf*perc_ruf ) ) - 2.0;
    // The dot-product correction from the Lys document.
    specular_power /= ( 4.0 * max( NDotR, EPSILON ) );
    // Convert the power back to a [0,1] roughness and scale into the mip range.
    return sqrt( max( EPSILON, sqrt( 2.0 / ( specular_power + 2.0 ) ) ) ) * MipScaler;
}
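
And for reference, here is roughly how I'm driving that function (env_cube and perc_roughness are my own placeholder names; NDotR, EPSILON, and MipScaler are shader globals, as above):

vec3 V = normalize(eye_pos - frag_pos);  // FragToEyeDir
vec3 R = reflect(-V, Normal);            // reflection of EyeToFragDir about N
// Reflection preserves the angle with the normal, so
// dot(Normal, R) == dot(Normal, V) -- this is the NDotR in question.
NDotR = clamp(dot(Normal, R), 0.0, 1.0);
float mip = computeSpecularCubeMipTest(perc_roughness);
vec3 specular_ibl = textureLod(env_cube, R, mip).rgb;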

I realize this is an esoteric subject. Since everyone is using popular game engines these days, no one is forced to understand this madness! But I appreciate any advice on how to go about this.

Edit: Just to make sure I'm clear, I'm referring to pure image-based lighting, with no directional lights, no spot lights, etc. Just a cube map that lights the whole scene, similar to the lighting in apps like Substance Painter and Blender's viewport shading mode.


Solution

  • I'm not familiar with this particular app, but it looks like you're on the right track. Part of the advantage of pre-convolving the cube maps is that each texel is customized to act as the light source for a particular reflection vector, so NdotV is indeed identical to NdotR, as you've noticed. R still needs to be calculated for the texture lookup, so it doesn't matter much which one you use for the dot product. There is no H or NdotH used for IBL lookups; those are only for point lights.

    If the grazing angles look wrong, perhaps there's a Fresnel term missing somewhere? Reflections behave differently at those angles; a sketch follows at the end of this answer.

    For what it's worth, the glTF Sample Viewer uses NdotV for its specular IBL lookup.
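
    To illustrate both points, here is a generic split-sum-style sketch in the spirit of what such viewers do (this is not the glTF Sample Viewer's actual code; prefiltered_env, brdf_lut, MAX_MIP, and F0 are placeholder names):

    vec3 N = normalize(Normal);
    vec3 V = normalize(eye_pos - frag_pos);
    vec3 R = reflect(-V, N);
    float NdotV = clamp(dot(N, V), 0.0, 1.0);  // equal to dot(N, R)

    // Prefiltered specular environment map, mip level driven by roughness.
    vec3 prefiltered = textureLod(prefiltered_env, R, roughness * MAX_MIP).rgb;

    // Split-sum BRDF LUT: a scale and bias for F0, indexed by NdotV and
    // roughness. The Fresnel behavior at grazing angles comes in here.
    vec2 ab = texture(brdf_lut, vec2(NdotV, roughness)).rg;

    vec3 specular = prefiltered * (F0 * ab.x + ab.y);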