
Is there a way to know about vendor-specific GPU rasterization differences?


Let me clarify a bit. Say I just want to render one line (or triangle) that looks identical on several different GPUs. If this line is somewhat diagonal, it might be rasterized slightly differently on each, with a slightly different staircase ("ladder") pattern (let's set aside antialiasing/interpolation/etc.).

Basically, is there a way to make sure that all the fragments of this line would be placed in the same pixels on different GPUs?

I know, for example, that there might be differences in how the hardware calculates the distance to the "pixel center": the same distance might be approximated as, roughly speaking, "0.499999...9" on one GPU and "0.500000...1" on another.
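
To make that concrete, here is a toy illustration in plain C (this is not how any real GPU rasterizes anything; the slope, the bias values, and the round-to-nearest-center rule are all assumptions for demonstration). It picks one row per column and shows how an error on the order of 1e-7 around an exact .5 tie shifts the entire staircase:

```c
#include <math.h>
#include <stdio.h>

/* Toy "rasterizer": for each x column, choose the row whose pixel
 * center is nearest to the line y = slope*x + bias. With slope 0.5
 * and bias 0, every other column lands exactly on a .5 tie. */
static void raster_line(double slope, double bias, const char *label)
{
    printf("%-8s", label);
    for (int x = 0; x < 8; ++x)
        printf("%d ", (int)floor(slope * x + bias + 0.5));
    printf("\n");
}

int main(void)
{
    raster_line(0.5,  0.0,  "exact:");   /* 0 1 1 2 2 3 3 4 */
    raster_line(0.5, -1e-7, "bias-:");   /* 0 0 1 1 2 2 3 3 */
    raster_line(0.5,  1e-7, "bias+:");   /* 0 1 1 2 2 3 3 4 */
    return 0;
}
```

Two implementations that resolve the same ties in opposite directions would produce exactly this kind of one-pixel "ladder" shift.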

For some specific use case, I can of course tweak the coordinates by some very small offset (a rough sketch of what I mean is below), but could it be done somewhat automatically? Maybe this kind of thing is described in some docs/specifications? I'm asking not only to make the rendering identical pixel-wise, but also to make sure the differences aren't caused by some other bug I've got.
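
For illustration, the manual tweak could look something like this (a minimal sketch; the helper name, the viewport-size parameter, and the snap-to-center choice are my own assumptions — it avoids exact .5 ties rather than guaranteeing identical rasterization):

```c
#include <math.h>

/* Snap an NDC x/y coordinate to the center of the pixel it falls in,
 * for a viewport that is `size` pixels wide/tall. Pixel centers sit at
 * k + 0.5 in window coordinates, so snapped vertices never land on the
 * ambiguous .5 boundaries between pixels. */
static float snap_to_pixel_center(float ndc, float size)
{
    float win = (ndc * 0.5f + 0.5f) * size;  /* NDC [-1,1] -> window [0,size] */
    float center = floorf(win) + 0.5f;       /* center of the containing pixel */
    return (center / size) * 2.0f - 1.0f;    /* back to NDC */
}
```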


Solution

  • There is no way to achieve what you want. The OpenGL standard doesn't mandate any specific method for rasterizing primitives (or even how floating-point numbers are represented), so there is no guarantee that two implementations will produce the same image.

    OpenGL only mandates that the same command in the same state gives the same result *on the same implementation*.

    Appendix A of the OpenGL 4.6 Spec states this explicitly:

    The OpenGL specification is not pixel exact. It therefore does not guarantee an exact match between images produced by different GL implementations. However, the specification does specify exact matches, in some cases, for images produced by the same implementation.
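
If your goal is partly to rule out a bug on your side, one practical check is to read the framebuffer back and compare it byte-for-byte between runs and machines. A minimal sketch, assuming a current GL context and an RGBA8 framebuffer (the hash scheme and the function name are illustrative, not part of the GL API):

```c
#include <stdint.h>
#include <stdlib.h>
#include <GL/gl.h>

/* Read back the framebuffer and hash it, so renders can be compared
 * across runs/GPUs offline. Returns 0 on allocation failure. */
uint64_t hash_framebuffer(int width, int height)
{
    size_t n = (size_t)width * height * 4;
    unsigned char *pixels = malloc(n);
    if (!pixels)
        return 0;

    glPixelStorei(GL_PACK_ALIGNMENT, 1);  /* tightly packed rows */
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    uint64_t h = 1469598103934665603ull;  /* FNV-1a 64-bit */
    for (size_t i = 0; i < n; ++i) {
        h ^= pixels[i];
        h *= 1099511628211ull;
    }
    free(pixels);
    return h;  /* equal hashes => pixel-identical images */
}
```

Equal hashes across runs on the same implementation are what the invariance rules above lead you to expect; differing hashes across vendors are, per the spec, allowed.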