
WebGPU: Changing the output format to non-float types, possible?


For rendering to <canvas>, it seems that navigator.gpu.getPreferredCanvasFormat() is the only way to get a valid format.

As per the docs, it will always return either rgba8unorm or bgra8unorm, both f32 types. If I try to set the format to anything other than those, e.g. to rgba8uint, I get "GPUCanvasContext.getCurrentTexture: Canvas not configured" when rendering.
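
Roughly, the setup I mean, as a minimal sketch (the canvas lookup and variable names are placeholders, not my real code):

    const adapter = await navigator.gpu.requestAdapter();
    const device = await adapter.requestDevice();
    const context = document.querySelector('canvas').getContext('webgpu');

    // This works: one of the two formats getPreferredCanvasFormat() can return.
    // context.configure({ device, format: navigator.gpu.getPreferredCanvasFormat() });

    // This is what I'd like to do instead; the configuration is rejected, and the
    // later getCurrentTexture() call then complains that the canvas is not configured.
    try {
      context.configure({ device, format: 'rgba8uint' });
    } catch (e) {
      console.log(e);
    }
    const texture = context.getCurrentTexture();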

Yet the documentation implies that other formats are possible, by stating that

if you don't use the preferred format when configuring the canvas context, you may incur additional overhead, such as additional texture copies, depending on the platform.

Just seems inefficient, even if the final render step happens on the GPU anyway. Instead of 1 byte per channel (4 bytes per pixel), I'd have to use 4 bytes per channel = 4 × 4-byte floats = 16 bytes per pixel. Wasteful?

Happy to accept the texture-conversion hit if I can reduce the bandwidth cost.

Am I missing something here? Is this possible?


Solution

  • For rendering to <canvas>, it seems that navigator.gpu.getPreferredCanvasFormat() is the only way to get a valid format.

    No, it's the way to get the optimal format for speed. You can use either 'rgba8unorm' or 'bgra8unorm' and it will work, but if the format you choose does not match the format returned from navigator.gpu.getPreferredCanvasFormat(), there may be a performance hit to convert to whatever the browser/OS needs in order to composite the canvas with the rest of the page/screen.
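
    For example, a small sketch (my own illustration, not from the original answer; it assumes the device and canvas from a normal WebGPU setup):

    // Either format is accepted when configuring the canvas context. Using the one
    // getPreferredCanvasFormat() returns just avoids a possible extra conversion/copy
    // when the browser composites the canvas with the rest of the page.
    const context = canvas.getContext('webgpu');
    const preferredFormat = navigator.gpu.getPreferredCanvasFormat(); // 'rgba8unorm' or 'bgra8unorm'
    context.configure({ device, format: preferredFormat });           // optimal

    // Also valid, but may cost an extra conversion if it isn't the preferred one:
    // context.configure({ device, format: 'rgba8unorm' });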

    As per the docs, it will always return either rgba8unorm or bgra8unorm, both f32 types.

    The f32 there refers to the fact that those formats return f32 values when you sample them, not to their storage size. In other words, you can bind those formats to

    var t: texture_2d<f32>;
    

    But you can not bind them to

    var t: texture_2d<u32>;
    

    nor

    var t: texture_2d<i32>;
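
    On the JavaScript side, the same split shows up in the bind group layout's sampleType. As a sketch (my own illustration, assuming a device from a normal setup; not part of the original answer):

    // 'rgba8unorm' / 'bgra8unorm' sample as floats, so they pair with
    // texture_2d<f32> in WGSL and sampleType: 'float' in the layout.
    const floatLayout = device.createBindGroupLayout({
      entries: [{
        binding: 0,
        visibility: GPUShaderStage.FRAGMENT,
        texture: { sampleType: 'float' },   // matches texture_2d<f32>
      }],
    });

    // A format like 'rgba8uint' samples as unsigned integers, so it needs
    // texture_2d<u32> in WGSL and sampleType: 'uint' in the layout.
    const uintLayout = device.createBindGroupLayout({
      entries: [{
        binding: 0,
        visibility: GPUShaderStage.FRAGMENT,
        texture: { sampleType: 'uint' },    // matches texture_2d<u32>
      }],
    });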
    

    Just seems inefficient, even if the final render step happens on the GPU anyway. Instead of 1 byte per channel (4 bytes per pixel), I'd have to use 4 bytes per channel = 4 × 4-byte floats = 16 bytes per pixel. Wasteful?

    rgba8unorm and bgra8unorm are 1-byte-per-channel formats: 4 bytes per pixel.

    For details of sizes, see the Texture Format Capabilities table at the bottom of the spec.
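
    To make that concrete, here is a small sketch (my own illustration, not from the original answer) of uploading data to an rgba8unorm texture; the cost is 4 bytes per pixel, not 16:

    const width = 256, height = 256;

    const texture = device.createTexture({
      size: [width, height],
      format: 'rgba8unorm',   // 1 byte per channel, 4 channels = 4 bytes per pixel
      usage: GPUTextureUsage.TEXTURE_BINDING | GPUTextureUsage.COPY_DST,
    });

    // The whole texture is width * height * 4 bytes of source data.
    const data = new Uint8Array(width * height * 4);

    device.queue.writeTexture(
      { texture },
      data,
      { bytesPerRow: width * 4 },   // 4 bytes per pixel
      [width, height],
    );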

    Although, even without looking at the table, most formats are self-explanatory (the sketch after this list shows how those sizes line up in code). Examples:

    • 'rgba8unorm': the 8 = 8 bits per channel (so 1 byte per channel) with 4 channels (r, g, b, and a). Each channel is read and normalized to an unsigned value between 0 and 1 (that's what unorm means), so 0 in the texture = 0.0 when read, 255 in the texture = 1.0 when read, etc.
    • 'rgba8snorm': the 8 = 8 bits per channel (so 1 byte per channel) with 4 channels (r, g, b, and a). Each channel is read and normalized to a signed value between -1 and 1 (that's what snorm means), so -128 in the texture = -1.0 and +127 in the texture = 1.0.
    • 'rgba16float': the 16 = 16 bits per channel (2 bytes) with 4 channels (r, g, b, and a). These values are in 16-bit floating point format.
    • 'rgba32float': the 32 = 32 bits per channel (4 bytes) with 4 channels (r, g, b, and a). These values are in standard 32-bit float format, the same format as Float32Array.
    • 'rg8uint': the 8 = 8 bits per channel (1 byte) with 2 channels (r and g). The values are unsigned 8-bit integers. This format can be used with texture_2d<u32> but not texture_2d<f32>.
    • ...etc...
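
    As a rough illustration of how those names map to per-pixel size (my own sketch, assuming a device as before; actual GPU memory use can vary by implementation):

    // Each format name encodes the bits per channel and the channel count,
    // which gives the per-pixel size of the data you upload.
    const formats = [
      { format: 'rgba8unorm',  bytesPerPixel: 4 },   // 4 channels * 1 byte
      { format: 'rgba8snorm',  bytesPerPixel: 4 },   // 4 channels * 1 byte
      { format: 'rgba16float', bytesPerPixel: 8 },   // 4 channels * 2 bytes
      { format: 'rgba32float', bytesPerPixel: 16 },  // 4 channels * 4 bytes
      { format: 'rg8uint',     bytesPerPixel: 2 },   // 2 channels * 1 byte
    ];

    for (const { format, bytesPerPixel } of formats) {
      device.createTexture({
        size: [64, 64],
        format,
        usage: GPUTextureUsage.TEXTURE_BINDING,
      });
      console.log(format, '~', 64 * 64 * bytesPerPixel, 'bytes of pixel data');
    }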