c++ · macos · opengl · projection · glm-math

OpenGL 1 pixel high texture rendering difference Windows/Linux vs OS X


I need help understanding why I have to make a specific change to get my OpenGL project to work on OS X (2019 MacBook), while without the change it works perfectly on Windows and Linux, on both ATI and NVIDIA hardware.

At some point I'm rendering to a frame buffer that is 1024 pixels wide and 1 pixel high. I need straightforward orthographic projection, so for my projection matrix I use:

glm::ortho(0.f, (float)LookupMapSize, 1.f, 0.f)

With this projection matrix on Windows and Linux, I render my line geometry and it works as expected, all pixels are written to.
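
For reference, the relevant setup looks roughly like this (simplified, with illustrative names; framebuffer and shader creation omitted):

glBindFramebuffer(GL_FRAMEBUFFER, lookupFbo);  // 1024x1 color attachment
glViewport(0, 0, LookupMapSize, 1);            // LookupMapSize == 1024
glm::mat4 proj = glm::ortho(0.f, (float)LookupMapSize, 1.f, 0.f);
// proj is uploaded as a uniform and the line geometry is drawn into the FBO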

On OS X however, I initially saw nothing ending up in my framebuffer, just the color I cleared it to with glClearColor and glClear. Suspecting a shader issue, I set the fragment output to vec4(1), expecting an all-white result, but I still saw nothing but the clear color in my framebuffer. Depth testing, blending, culling and stencils were not an issue, so it had to be that my matrices were wrong. After much fiddling, I finally figured out that all I had to do was change my projection matrix to this:

glm::ortho(0.f, (float)LookupMapSize, 0.f, 1.f)

But why? Where does this difference come from? On Windows/Linux, bottom is at 1.f and top is at 0.f, while on OS X it's exactly the other way around. If I use the "OS X" matrix on Windows/Linux, I get the exact same bug I initially had on OS X.

Rather than just keeping this platform specific change in my code, I would like to understand what's going on.

Edit: I check all my OpenGL calls automatically (glGetError); nothing returns an error anywhere. Unfortunately, the OpenGL debug functions (glDebugMessageCallback) are not available on OS X...

Edit: I verified that the results of glm::ortho are identical on both OS X and Linux/Windows, so my input into OpenGL is the same on all platforms.


Solution

  • OpenGL is not specified as a pixel-exact rendering API; different GPUs and different drivers (even on the same GPU) do not produce identical output for identical inputs. However, the OpenGL specification does make some hard requirements that implementors must fulfill, and which you as a user of the API can rely on.

    In your case, setting up a 1 pixel high viewport with an ortho matrix mapping the y range from 0 to 1 means that y=0 will land on one edge of your pixel row and y=1 on the other. If you draw a line exactly on an edge between two pixels, the OpenGL specification does not say in which direction implementations must "round" in this case; they just have to round the same way consistently.

    So this means that if the two options you have are y=0 and y=1, one of the two will not draw the line (because it effectively ends up outside of your framebuffer), but which one is completely implementation-specific.
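
    To make that concrete, here is a small standalone calculation (it assumes a glViewport(0, 0, 1024, 1) for your 1024x1 target) that follows a vertex through your first matrix from clip space to window coordinates:

        #include <cstdio>
        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>

        int main() {
            const float LookupMapSize = 1024.f;
            // The matrix from the question: left=0, right=1024, bottom=1, top=0.
            glm::mat4 proj = glm::ortho(0.f, LookupMapSize, 1.f, 0.f);

            const float ys[] = { 0.f, 0.5f, 1.f };
            for (float y : ys) {
                glm::vec4 clip = proj * glm::vec4(0.f, y, 0.f, 1.f);
                float ndcY    = clip.y / clip.w;             // w is 1 for an ortho matrix
                float windowY = (ndcY + 1.f) * 0.5f * 1.f;   // viewport height is 1
                std::printf("y=%.1f -> NDC y=%+.1f -> window y=%.2f\n", y, ndcY, windowY);
            }
        }
        // y=0 and y=1 both land exactly on pixel edges (window y = 1.0 and 0.0);
        // only y=0.5 hits the center of the single pixel row (window y = 0.5).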

    However, drawing lines exactly on the edges between pixels is a bad idea, especially if you have very specific pixels in mind that should be filled. Setting the vertices at the center of the pixels you want to fill makes the most sense, which here is y=0.5.
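
    A minimal sketch of that (the 2D vertex layout and names here are an assumption, not your actual code): keep your original projection matrix and put both endpoints of the line on the pixel centers.

        glm::mat4 proj = glm::ortho(0.f, (float)LookupMapSize, 1.f, 0.f);

        // Both endpoints at y = 0.5, the vertical center of the single pixel row.
        const float line[] = {
            0.f,                  0.5f,
            (float)LookupMapSize, 0.5f,
        };
        // Upload as usual and draw with glDrawArrays(GL_LINES, 0, 2).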

    However, for a pass which just generates a width x 1 LUT, I don't see the need to set up any transform matrices at all: you can work in untransformed clip space and simply draw from (-1,0,0,1) to (1,0,0,1). y=0 is fine here, as that is exactly the vertical center of your viewport.
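
    For example (just a sketch, with illustrative names and the usual VAO/VBO/program setup omitted):

        // Vertex shader: positions are already in clip space, no matrices involved.
        static const char* kLutVertexShader =
            "#version 150\n"
            "in vec4 position;\n"
            "void main() { gl_Position = position; }\n";

        // One line spanning the full viewport width; y = 0 is the vertical
        // center of a 1-pixel-high viewport, so the whole 1024x1 row is covered.
        const float lutLine[] = {
            -1.f, 0.f, 0.f, 1.f,
             1.f, 0.f, 0.f, 1.f,
        };
        // Draw with glDrawArrays(GL_LINES, 0, 2) and a vec4 "position" attribute.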