c++ · gpu · hlsl · direct3d

D3D: hardware mip linear blending is different from shader linear blending


I have a D3D application that renders a mip-mapped cubemap in a fullscreen-quad pixel shader. I stumbled on a weird behavior and wrote the following test to illustrate the issue.

This shader outputs the absolute difference between hardware mip-map filtering and its HLSL equivalent.

TextureCube  Tex_EnvMap     : register(ps, t0);
SamplerState Sampler_EnvMap : register(ps, s0);

void FragmentMain(in SFragmentInput fInput, out SFragmentOutput fOutput)
{
    // ...
    // Not shown:
    // Mip is a uniform set by the application.
    // dir is the sampling direction set up from the pixel coordinate.
    // ...

    // Manual trilinear blend: sample the two surrounding mip levels
    // explicitly and lerp by the fractional part of Mip.
    float fl = floor(Mip);
    float fr = frac(Mip);

    float3 c0 = Tex_EnvMap.SampleLevel(Sampler_EnvMap, dir, fl).rgb;
    float3 c1 = Tex_EnvMap.SampleLevel(Sampler_EnvMap, dir, fl + 1.).rgb;
    float3 c = lerp(c0, c1, fr);

    // Reference: let the hardware blend between mip levels.
    float3 cref = Tex_EnvMap.SampleLevel(Sampler_EnvMap, dir, Mip).rgb;

    fOutput.outColor = float4(abs(c - cref), 1);
}

On one computer, this test renders black across all mip values (0 to 10), which is expected. The base application (display of the cubemap) shows a perceptually linear blend across all mip values.

On another computer, the test is black on integer mips (obviously) but clearly non-zero everywhere else: the two ways of filtering differ. The base application shows a perceptually more step-like blending, though some blending remains, as if the fractional part of the mip were smoothstepped (i.e. fr replaced by something like fr*fr*(3 - 2*fr)).

A PIX capture on the machine with the issue shows the expected values for my resources:

Sampler:
Filter  ANISOTROPIC
AddressU    CLAMP
AddressV    CLAMP
AddressW    CLAMP
MipLODBias  0.00000f
MaxAnisotropy   16
ComparisonFunc  0
BorderColor 
MinLOD  0.00000f
MaxLOD  10.0000f

Image:
Format: R16G16B16A16_FLOAT
Dimensions: 1024⨯1024
# of Mip Levels: 11
Sample Count: 1
Mip Levels: 0-10
Array Slices: 0-5
ResourceMinLODClamp: 0
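
For reference, the captured sampler state corresponds roughly to the creation code below. This is a minimal sketch assuming plain D3D11; my application actually goes through a higher-level rendering interface, so the function name and the missing error handling here are purely illustrative.

#include <d3d11.h>

// Minimal sketch (assuming D3D11): a sampler matching the PIX capture above.
ID3D11SamplerState* CreateCapturedSampler(ID3D11Device* device)
{
    D3D11_SAMPLER_DESC desc = {};
    desc.Filter         = D3D11_FILTER_ANISOTROPIC;      // as captured
    desc.AddressU       = D3D11_TEXTURE_ADDRESS_CLAMP;
    desc.AddressV       = D3D11_TEXTURE_ADDRESS_CLAMP;
    desc.AddressW       = D3D11_TEXTURE_ADDRESS_CLAMP;
    desc.MipLODBias     = 0.0f;
    desc.MaxAnisotropy  = 16;
    desc.ComparisonFunc = D3D11_COMPARISON_NEVER;        // capture shows 0, i.e. unused
    desc.MinLOD         = 0.0f;
    desc.MaxLOD         = 10.0f;

    ID3D11SamplerState* sampler = nullptr;
    device->CreateSamplerState(&desc, &sampler);         // error handling elided
    return sampler;
}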
  • What could lead to this? I'm looking for some filtering-related parameter set by my application that could explain the discrepancy between in-shader filtering and D3D sampler filtering, like anisotropic filtering (but this is not a candidate, as I use explicit LOD sampling). Finding out why my application sets this differently on the two machines shouldn't be too hard.
  • I don't think this is a driver issue, but I'm not an expert here. I use a higher-level rendering API interface, so I was able to run the same test with OpenGL and Vulkan, and the issue shows up there too.

Thanks a lot! :-)


Solution

  • I wrote above that "anisotropic filtering is not a candidate as I use explicit LOD sampling". It turns out that this statement is wrong. To my understanding, anisotropic filtering should not affect sampling with an explicit LOD (SampleLevel / textureLod); however, it seems that on some implementations it does, e.g.: https://forum.unity.com/threads/tex2dlod-and-anisotropic-filtering.585955/ I'm still curious why exactly, and would appreciate more references.

    Setting the sampler to not use anisotropic filtering solved the issue (see the sketch below).
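
For anyone hitting the same thing, here is a minimal sketch of the change, again assuming plain D3D11 and the desc from the sketch above. Any non-anisotropic filter with linear mip filtering should do; MaxAnisotropy is then ignored by the API.

// Minimal sketch (assuming D3D11): replace the anisotropic filter with
// plain trilinear; the test shader uses explicit LODs anyway.
desc.Filter        = D3D11_FILTER_MIN_MAG_MIP_LINEAR;  // was D3D11_FILTER_ANISOTROPIC
desc.MaxAnisotropy = 1;                                // ignored for non-anisotropic filters

After this change, the hardware inter-mip blend matches the manual lerp and the test shader renders black across the full mip range.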