This is a follow-up to an earlier question where I asked how I might batch together many quads and frequently changing lines in WebGL 1.0. I'm trying an alternative approach in which I'd like to do the following:
Use a texture the size of the entire window, whose color values act as a depth index, and index into the texture at every fragment using gl_FragCoord.xy. From my understanding, I wouldn't need any attributes or UV coordinates; I just want (0, 0) to be the top left and (width, height) to be the bottom right. (This MIGHT be an XY problem, but I would still like to try the approach, if only for the learning benefit.)
My motivation: Just in case this IS an XY problem, a tangent to clarify what I am hoping to do. I need to interleave layers of quads with lines. I am hoping to apply Bresenham's algorithm to draw lines into the texture, using the color to mark which layer each line is on at a particular pixel coordinate. That way, I can assign indices to quads and compare them against the line index baked into the texture: if the line index at a fragment is greater, the line color should be drawn; otherwise the quad is on top. (Maybe this would be too slow to do several frames in a row: running Bresenham for each line, writing into the texture, and re-uploading the data to the GPU. WebGL 2.0 probably has some extra depth buffer features that would help with this, but I must use 1.0.)
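To make the layer-marking idea concrete, here is a minimal sketch of the CPU side, assuming one Bresenham pass per line over a width × height RGBA byte buffer, storing the layer index in the red channel. The function name markLine and the layer parameter are my own for illustration, not from any library:

```javascript
// Walk a Bresenham line from (x0, y0) to (x1, y1) over an RGBA buffer,
// writing the layer index into the red channel of every texel it crosses.
// Keeps the highest layer index where lines overlap.
function markLine(buf, width, x0, y0, x1, y1, layer) {
  const dx = Math.abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
  const dy = -Math.abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
  let err = dx + dy;
  while (true) {
    const i = (y0 * width + x0) * 4; // 4 bytes per RGBA texel
    if (layer > buf[i]) buf[i] = layer; // topmost layer wins
    if (x0 === x1 && y0 === y1) break;
    const e2 = 2 * err;
    if (e2 >= dy) { err += dy; x0 += sx; }
    if (e2 <= dx) { err += dx; y0 += sy; }
  }
}
```

After marking all lines for a frame, the buffer would be re-uploaded with gl.texSubImage2D, which is exactly the per-frame cost being questioned here.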
The issue is that my attempts are yielding strange results. (Note: I am using MDN's tutorial on dynamic textures from video clips as a reference.)
To check whether the coordinates are correct, I am setting the colors to random values in groups of 4 (RGBA), where the 4th is always 255 for full opacity. What I see is that my entire shape is one color, which suggests that the texture coordinates really go from 0 to 1 and every gl_FragCoord.xy selects the color of a single "pixel" (or what I thought was a pixel). This could be wrong, but I have no other explanation at the moment. Searches so far haven't turned up examples of using a texture as a basic bitmap.
I will post a few snippets of my code in case that will help. I am using an orthographic projection matrix from gl-matrix.
TEXTURE CREATION:
function createScreenDepthTexture(gl, width, height, scale) { // scale is 1 for now
    const texture = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, texture);
    // I think that I need to flip Y since (0, 0) is in the bottom-left corner by default
    gl.pixelStorei(gl.UNPACK_FLIP_Y_WEBGL, true);
    const level = 0;
    const internalFormat = gl.RGBA;
    const border = 0;
    const srcFormat = gl.RGBA;
    const srcType = gl.UNSIGNED_BYTE;
    const size = width * scale * height * scale * 4; // 4 bytes per RGBA texel
    const src = new Uint8Array(size);

    // from MDN
    function getRandomIntInclusive(min, max) {
        min = Math.ceil(min);
        max = Math.floor(max);
        return Math.floor(Math.random() * (max - min + 1)) + min; // both bounds inclusive
    }

    for (let i = 0; i < size; i += 4) {
        // random colors, but I see only one color in the end
        src[i]     = getRandomIntInclusive(0, 255);
        src[i + 1] = getRandomIntInclusive(0, 255);
        src[i + 2] = getRandomIntInclusive(0, 255);
        src[i + 3] = 255;
    }

    gl.texImage2D(gl.TEXTURE_2D, level, internalFormat, width * scale, height * scale, border, srcFormat, srcType, src);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
    return { texture: texture, src: src, w: width * scale, h: height * scale };
}
TEXTURE SETUP:
...
const textureRecord = createScreenDepthTexture(G.gl, 640, 480, SCALE);
// G is a wrapper containing the gl context and other data
G.gl.activeTexture(G.gl.TEXTURE0);
G.gl.bindTexture(G.gl.TEXTURE_2D, textureRecord.texture);
G.gl.uniform1i(G.gl.getUniformLocation(G.program, "u_sampler"), 0);
...
VERTEX AND FRAGMENT SHADERS (several variables unused in this test so I excluded them):
// NOTE my JavaScript projection matrix is:
mat4.ortho(matProjection, 0.0, G.canvas.width, G.canvas.height, 0.0, -1000.0, 1000.0);
// VERTEX
precision highp float;
attribute vec3 a_position;
uniform mat4 u_matrix;
uniform vec4 u_color;
varying vec4 v_color;
void main() {
    v_color = u_color;
    gl_Position = u_matrix * vec4(a_position, 1.0);
}
// FRAGMENT
precision highp float;
uniform sampler2D u_sampler;
void main() {
    // I thought that this would set the color of this specific pixel
    // to be the color of the texture at this specific coordinate,
    // but I think the coordinate systems are off if I see only one color.
    // Is the sampler set to 0-1? How would I change this correctly?
    gl_FragColor = texture2D(u_sampler, gl_FragCoord.xy);
}
What else might I be missing? Also, feel free to tell me if this experiment is, in fact, definitely not the way to go due to performance concerns; I'd be rewriting the texture several frames in a row, repeatedly. Thank you in advance.
    // I thought that this would set the color of this specific pixel
    // to be the color of the texture at this specific coordinate,
    // but I think that the coordinate systems are off if I see only one color,
    // is the sampler set to 0-1? How would I change this correctly?
    gl_FragColor = texture2D(u_sampler, gl_FragCoord.xy);
Yes, texture2D takes normalized texture coordinates; there is no way to directly access a texture in texel space in WebGL 1 (in WebGL 2 you'd use texelFetch). So you need to calculate the normalized texture coordinates (UVs) by dividing gl_FragCoord.xy by the width and height of the texture (in this case, your screen/window).
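A minimal fragment-shader sketch of that division, assuming you add a u_resolution uniform (my own name, not from your code) holding the texture's width and height in pixels:

```glsl
precision highp float;
uniform sampler2D u_sampler;
uniform vec2 u_resolution; // texture size in pixels, e.g. (640.0, 480.0)

void main() {
    // gl_FragCoord.xy is in window pixels; dividing maps it into 0..1 UV space
    vec2 uv = gl_FragCoord.xy / u_resolution;
    gl_FragColor = texture2D(u_sampler, uv);
}
```

On the JavaScript side you'd set it once per resize, something like gl.uniform2f(gl.getUniformLocation(program, "u_resolution"), width, height).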
Answering the questions in your comment:
"could there be rounding errors"
Not really; all the math is 32-bit floating point, which has ample precision for this. You might want to make sure you sample the centers of the texels by offsetting your normalized coordinates by half a texel, though.
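As a sketch of that half-texel centering (note that gl_FragCoord.xy already sits at pixel centers, so this matters mainly when you derive UVs from integer texel indices; u_resolution is again an assumed uniform holding the texture size):

```glsl
// Convert an integer texel index into a UV that samples the texel's center.
vec2 texel = floor(gl_FragCoord.xy);          // integer texel index
vec2 uv = (texel + 0.5) / u_resolution;       // +0.5 lands on the texel center
```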
if I were ever to move to WebGL 2, how would this be done there?
Using texelFetch instead of texture2D.
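In a WebGL 2 (GLSL ES 3.00) fragment shader, that would look roughly like this:

```glsl
#version 300 es
precision highp float;
uniform sampler2D u_sampler;
out vec4 outColor;

void main() {
    // texelFetch takes integer texel coordinates plus a mip level,
    // so no normalization by the texture size is needed
    outColor = texelFetch(u_sampler, ivec2(gl_FragCoord.xy), 0);
}
```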
"any thoughts on the efficiency of this method?"
Well, I'd think it'll be pretty slow. If I understood you correctly, the flow would be:
10 draw quad with shader reading from your screen texture
20 read back the result
30 update the texture
40 goto 10
If that's the case, it's pure poison for performance. So yeah, I'd try to make lines work, or draw strips with a line pixel shader on them (using the hardware z-buffer, with a pixel shader to produce pretty, constant-width lines).