I am a hobbyist programmer trying to learn graphics programming using WebGPU and JavaScript. I have a question about how @builtin(position)
works, as it doesn't behave the way I would expect.
My current mental model of the rendering pipeline is the naive one given in practically all introductory tutorials on graphics programming (regardless of API: OpenGL, WebGL, etc.), namely that one prepares some vertex (and possibly colour and texture) data in buffers, and then writes a couple of shaders (at least a vertex and a fragment shader) which take that data and render it however you specify in the shaders.
My confusion / broken mental model comes from how the @builtin(position)
data changes between the vertex and fragment shaders. I have from the WGSL draft spec §16 that the usage of @builtin(position)
is as follows in the vertex and fragment stages respectively:
Output position of the current vertex, using homogeneous coordinates....
Framebuffer position of the current fragment in framebuffer space...
I have used ellipses to indicate that the spec says more; it is just not relevant to my query.
With that in mind, I expect to carry position data through the vertex shader into the fragment shader using @builtin(position)
and to receive at the fragment stage the position of the current fragment, but instead I receive a (seemingly) random vertex's position from the buffer.
I have an example piece of code that showcases my confusion, given below. It is just a simple vertex and fragment shader:
struct vertex_stage_struct {
    @location(0) position_data : vec4<f32>,
}

struct fragment_stage_struct {
    @builtin(position) position_data : vec4<f32>,
    @location(0) other_position_data : vec2<f32>
}

@vertex
fn ${vertex_entry_point_identifier}(input : vertex_stage_struct) -> fragment_stage_struct {
    var output : fragment_stage_struct;
    output.position_data = input.position_data;
    output.other_position_data = vec2<f32>(input.position_data[0], input.position_data[1]);
    return output;
}

@fragment
fn ${fragment_entry_point_identifier}(input : fragment_stage_struct) -> @location(0) vec4<f32> {
    return vec4<f32>(0.5 * (1 + input.position_data[0]), 0.5 * (1 + input.position_data[1]), 0, 1);
}
Here ${fragment_entry_point_identifier}
and ${vertex_entry_point_identifier}
are template strings for whatever you wish to name the entry points in your pipeline config.
I shan't show the rest of my JavaScript code that sets up the scene and renders onto a canvas, because I deem it irrelevant to my question; however, I will add it on request. The crux is that I set up two triangles with vertices [-1,-1,0,1], [1,-1,0,1], [-1,1,0,1], [1,1,0,1],
drawn via a triangle strip, to produce a single rectangle covering the entire canvas. I was originally testing how to implement textures using the WebGPU API to give data to render onto surfaces, but for my example I switched to colouring the fragments based on position because it simplifies the example and arguably demonstrates the problem better.
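In outline, the relevant part of that setup looks something like the following. This is a simplified sketch rather than my actual code; device, shaderModule and canvasFormat stand in for the real objects:

// Simplified sketch: a full-screen quad drawn as a triangle strip.
const vertexData = new Float32Array([
    -1, -1, 0, 1,
     1, -1, 0, 1,
    -1,  1, 0, 1,
     1,  1, 0, 1,
]);
const vertexBuffer = device.createBuffer({
    size: vertexData.byteLength,
    usage: GPUBufferUsage.VERTEX | GPUBufferUsage.COPY_DST,
});
device.queue.writeBuffer(vertexBuffer, 0, vertexData);

const pipeline = device.createRenderPipeline({
    layout: "auto",
    vertex: {
        module: shaderModule,
        entryPoint: vertex_entry_point_identifier,
        buffers: [{
            arrayStride: 16, // one vec4<f32> per vertex
            attributes: [{ shaderLocation: 0, offset: 0, format: "float32x4" }],
        }],
    },
    fragment: {
        module: shaderModule,
        entryPoint: fragment_entry_point_identifier,
        targets: [{ format: canvasFormat }],
    },
    primitive: { topology: "triangle-strip" },
});

// In the render pass:
//   pass.setPipeline(pipeline);
//   pass.setVertexBuffer(0, vertexBuffer);
//   pass.draw(4);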
So my expectation / model of what should happen in a render pass is as follows: a smooth gradient over the whole canvas, with the interpolated position running from -1 to 1 across the quad, giving black in the bottom-left corner, red in the bottom-right, green in the top-left, and yellow in the top-right.
However, what I see instead is as follows: a completely yellow canvas, bar the first column of pixels on the left, which is fully green, and the top row of pixels, which is fully red. It seems, then, that the @builtin(position) passed to the fragment stage is just one of the vertices used in the tessellation for the vast majority of the render.
Now, if one changes the single (return) statement of the fragment shader to
return vec4<f32>(0.5 * (1 + input.other_position_data[0]), 0.5 * (1 + input.other_position_data[1]), 0, 1);
then one gets the expected output. This is confusing to me because I have done the exact same thing to the other_position_data
component as to the position_data
component. This is clearly due to the semantics of how @builtin(position) works, but it is completely counterintuitive to what I expect, and I want to know why. I know that something correct is happening, because my shader has been validated by my device, but why it does what it does is unknown to me.
If that is the way things are with respect to WGSL, then that is fine and I can build shaders either way. I would like to learn to write good WGSL code, and having to add an extra variable at a new location just to pass position data on to the fragment stage, when there is already a builtin for position, feels a little unwieldy. Is this the way it should be done in WGSL (i.e. is what I have written in the second case idiomatic WGSL for colouring with respect to position)?
To reiterate the question: I would like to know what definitions / program semantics in the spec cause this difference in behaviour between the two ways of passing the position data through the pipeline, and furthermore whether the code I am trying to write takes the correct approach to colouring geometry by fragment position in the first place.
If you take a look at this page, you will find your answer: https://webgpufundamentals.org/webgpu/lessons/webgpu-inter-stage-variables.html
To put it simply, @builtin(position) does not have the same meaning in the vertex shader and in the fragment shader. As the linked page says, in the vertex shader @builtin(position) is the output position that the GPU uses to draw, whereas in the fragment shader it is the pixel coordinate that the fragment shader is currently coloring.
Moreover, if you want to share data between your two shaders, you must do it by location index, which is what @location(0) indicates; this explains why swapping to other_position_data behaves as you expected.
When you use the builtin, you get the pixel coordinate that you are currently drawing. You get green on the left because the x coordinate is (close to) 0 there, and red on the top bar because the y coordinate is (close to) 0 there; everywhere else you get yellow because your vec4 ends up as something like vec4(1 /* x well above 0 */, 1 /* y well above 0 */, 0, 1).
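If you do want to colour based on position using only the builtin, you can normalise the framebuffer coordinate yourself. Here is a minimal sketch, not your code: it assumes the canvas size is bound as a uniform named resolution, and fs_main is just a placeholder entry point name:

const fragmentShaderSource = /* wgsl */ `
    // Assumed binding: the canvas size in pixels, written from JavaScript.
    @group(0) @binding(0) var<uniform> resolution : vec2<f32>;

    @fragment
    fn fs_main(@builtin(position) frag_pos : vec4<f32>) -> @location(0) vec4<f32> {
        // frag_pos.xy is in framebuffer (pixel) space, origin at the top-left.
        let uv = frag_pos.xy / resolution;                        // 0..1
        let clip = vec2<f32>(uv.x * 2.0 - 1.0, 1.0 - 2.0 * uv.y); // back to -1..1, y up
        return vec4<f32>(0.5 * (1.0 + clip.x), 0.5 * (1.0 + clip.y), 0.0, 1.0);
    }
`;

That said, passing the value through an inter-stage @location, as you already do with other_position_data, is the usual way to do this and avoids the extra uniform.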