I am trying to implement shadow maps in OpenGL core 3.3. When I pass a bias to the texture function in GLSL, it doesn't seem to do anything at all. Am I using it wrong?
#version 330
uniform sampler2D diffusetex;
uniform sampler2D normaltex;
uniform sampler2D postex;
uniform sampler2D depthtex;
uniform sampler2DShadow shadowmap;
uniform mat4 shadowmat;
in vec2 uv;
layout (location = 0) out vec4 outColor;
void main() {
    vec3 normal = normalize(texture(normaltex, uv).xyz);
    vec3 world_position = texture(postex, uv).xyz;
    vec4 shadowcoord = shadowmat * vec4(world_position, 1.0);
    float shadow = texture(shadowmap, shadowcoord.xyz, 0.5); // the bias that seems to do nothing
    float luma = max(0.1, shadow); // ambient light
    outColor = texture(diffusetex, uv) * luma;
}
I am using Linux, the NVIDIA proprietary drivers, and Go. I doubt that has anything to do with it, but just in case, there it is.
If you are trying to avoid shadow acne, you need to apply that offset to your actual texture coordinates, not pass it as the third argument. The optional [bias] parameter in texture lookup functions is for LOD computation: it "sharpens" (negative values) or "softens" (positive values) mipmap filtering. That is also why it appears to do nothing here: shadow maps are not normally mipmapped, so shifting the LOD has no visible effect.
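As an illustration (a minimal sketch reusing the diffusetex sampler from your shader), the same parameter on an ordinary mipmapped sampler2D merely shifts which mip level gets sampled:

vec4 sharper = texture(diffusetex, uv, -1.0); // negative bias: lower LOD, crisper
vec4 softer  = texture(diffusetex, uv,  1.0); // positive bias: higher LOD, blurrier

The acne bias, by contrast, goes into the compare value itself, i.e. the z component of the projected coordinate: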
vec4 shadowcoord = shadowmat * vec4(world_position, 1.0);
shadowcoord.z -= 0.001; // offsets the compare value; assumes the default GL_LEQUAL compare func
Keep in mind that in window space, the 0.5 you were passing amounts to half of your depth range, which is not a good bias; typical values are closer to 0.001. A bias of 0.5 will most likely wipe out all of your shadows.
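Putting it together, here is a minimal sketch of the corrected fragment shader (same uniforms and inputs as your version, with the unused normal and depth samplers dropped; the 0.001 bias and the GL_LEQUAL compare assumption are mine and may need tuning for your scene):

#version 330
uniform sampler2D diffusetex;
uniform sampler2D postex;
uniform sampler2DShadow shadowmap;
uniform mat4 shadowmat;
in vec2 uv;
layout (location = 0) out vec4 outColor;
void main() {
    vec3 world_position = texture(postex, uv).xyz;
    // Project into the shadow map's [0,1] coordinate space
    // (shadowmat is assumed to include the 0.5 bias matrix).
    vec4 shadowcoord = shadowmat * vec4(world_position, 1.0);
    // Depth bias on the compare value, not the LOD; assumes the
    // texture's compare func is the default GL_LEQUAL.
    shadowcoord.z -= 0.001;
    // texture() on a sampler2DShadow returns the comparison result.
    float shadow = texture(shadowmap, shadowcoord.xyz);
    float luma = max(0.1, shadow); // ambient floor
    outColor = texture(diffusetex, uv) * luma;
}

If shadowmat contains a perspective projection (a spot light, say), divide by w before the lookup, or use textureProj(shadowmap, shadowcoord), which does the division for you.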