I have a depth texture whose format is GL_DEPTH24_STENCIL8 (or GL_DEPTH_COMPONENT24). I can sample this texture correctly on some devices (iPhone5s, iPad1), but on others it fails with invalid pixels. Below is the bound GPU texture (the depth) and its format info, captured in Xcode:
Note that I clip the value into [0.999, 1], since the homogeneous depth values mostly fall in that range. I sample the texture and clip the value in my shader as well:
uniform sampler2D tex0;
varying mediump vec2 TexCoord0;

void ps_main()
{
    float bias = 0.0;
    // sample the depth texture (depth is replicated across the channels)
    lowp vec4 zb = texture2D(tex0, TexCoord0, bias);
    // remap [0.999, 1.0] to [0, 1] for visualization
    mediump float linearz = (zb.r - 0.999) / (1.0 - 0.999);
    gl_FragColor = vec4(linearz, linearz, linearz, 1.0);
}
This shader gives a wrong result on the device below:
The device and driver info is:
Driver: OpenGLES2
Description: Apple A8 GPU
Version: OpenGL ES 2.0 Apple A8 GPU - 77.14
Vendor: Apple Inc.
Width: 2048, Height: 1536, BitDepth: 32
Any clues to this problem? Or some other debug suggestions?
You are relying on more precision than the API guarantees to provide. For example, the variable zb is declared lowp, which only guarantees about 1 part in 256 of precision, yet you then use it in a computation that needs at least 1 part in 1000 of precision when computing linearz.

Try increasing the precision to highp to get above the critical precision threshold.
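To see why lowp cannot work here, consider an idealized model of the precision qualifiers (a sketch, not exact GPU behaviour): treat lowp as roughly 2^-8 absolute precision and highp as at least 2^-16, per the ES 2.0 minimum requirements. Rounding depth values in [0.999, 1.0] to a 1/256 grid collapses them all to the same number, so linearz becomes constant:

```python
def quantize(x, bits):
    """Round x to the nearest multiple of 2**-bits, modelling a value
    stored with only `bits` bits of absolute precision (an idealized
    model of GLSL precision qualifiers, not exact hardware behaviour)."""
    step = 1.0 / (1 << bits)
    return round(x / step) * step

# lowp: ~2^-8 precision. Every depth value in [0.999, 1.0] snaps to
# the same representable number, so (zb - 0.999) / 0.001 is constant.
lowp_vals = {quantize(z, 8) for z in (0.9990, 0.9995, 1.0000)}
print(lowp_vals)  # a single collapsed value

# highp: at least 2^-16 precision, enough to keep 0.999 and 1.0 distinct.
print(quantize(0.999, 16) != quantize(1.0, 16))  # True
```

This is why declaring zb as highp (and doing the remap in highp) restores distinct values across the [0.999, 1] range.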