For our multiplatform engine that supports both OpenGL and DirectX9, I am adding support for decals. In OpenGL I can set glPolygonOffset(-1.0f, -1.0f) to fix z-fighting between the wall and the decals. I want the DirectX version to behave exactly the same, so I call this:
// D3D9 takes floating-point render states as the raw bits of the float,
// reinterpreted as a DWORD.
float offsetFloat = -1.0f;
DWORD offsetDWord = *((DWORD*)&offsetFloat);
device->SetRenderState(D3DRS_DEPTHBIAS, offsetDWord);
device->SetRenderState(D3DRS_SLOPESCALEDEPTHBIAS, offsetDWord);
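For reference, the OpenGL side that works looks roughly like this (I'm assuming here that polygon offset is enabled for filled primitives during the decal pass):

glEnable(GL_POLYGON_OFFSET_FILL);
glPolygonOffset(-1.0f, -1.0f);
// ... draw the decal geometry ...
glDisable(GL_POLYGON_OFFSET_FILL);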
However, those SetRenderState calls give me an extremely large depth bias. It seems I need to use much smaller values in DirectX9, but I can't find how small. I noticed that the OGRE engine's source divides by 250000, but despite the comment I don't see where that number comes from. Also, they only divide the constant bias, not the slope-scale bias:
// D3D also expresses the constant bias as an absolute value, rather than
// relative to minimum depth unit, so scale to fit
constantBias = -constantBias / 250000.0f;
__SetRenderState(D3DRS_DEPTHBIAS, FLOAT2DWORD(constantBias));
slopeScaleBias = -slopeScaleBias;
__SetRenderState(D3DRS_SLOPESCALEDEPTHBIAS, FLOAT2DWORD(slopeScaleBias));
So my question: what do I need to pass to DirectX9 to get the exact same result as glPolygonOffset?
I haven't found an exact number anywhere, but by experimenting I have figured out that to get roughly the same effect in OpenGL and DirectX, I need to divide by 3500000 instead of the 250000 mentioned above.
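For what it's worth, here is roughly how I ended up wrapping it. The helper names are just for illustration, the 3500000 divisor is my empirical value rather than a documented constant, and I'm assuming a negative bias pulls the decal towards the camera in both APIs:

#include <d3d9.h>
#include <cstring>

// Reinterpret a float's bits as a DWORD, which is how D3D9 expects
// floating-point render state values to be passed.
inline DWORD FloatAsDword(float f)
{
    DWORD d;
    std::memcpy(&d, &f, sizeof(d));
    return d;
}

// Rough glPolygonOffset(factor, units) equivalent for D3D9.
// The 3500000.0f divisor is empirical, not an official constant.
void SetPolygonOffsetD3D9(IDirect3DDevice9* device, float factor, float units)
{
    float constantBias   = units / 3500000.0f; // only the constant term is rescaled
    float slopeScaleBias = factor;             // the slope-scaled term maps across as-is
    device->SetRenderState(D3DRS_DEPTHBIAS, FloatAsDword(constantBias));
    device->SetRenderState(D3DRS_SLOPESCALEDEPTHBIAS, FloatAsDword(slopeScaleBias));
}

So for the decal pass I call SetPolygonOffsetD3D9(device, -1.0f, -1.0f), matching the glPolygonOffset(-1.0f, -1.0f) call on the OpenGL side.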
If anyone knows the exact number, or why it is what it is, I'd love to hear it, but for practical purposes I think this conclusion will do for me.