I know that `texelFetch` performs a lookup using integer texel coordinates in the range [0, textureSize - 1], and `textureLod` uses normalized coordinates in the range [0, 1], both with an explicit level of detail.
But I have noticed that `textureLodOffset` requires the offset as an integer type (`ivec2`, `int`, and so on, depending on the sampler dimension). This seems to be the case for `texelFetchOffset` as well.
I can see why this makes sense for `texelFetch`, but I am not sure how it relates to `textureLod`.
I am used to computing the offset coordinate manually in the shader with something like `coord.xy + 1.0 / vec2(textureSize(tex, 0))` for `textureLod`. I don't think this causes any issues with performance, but I would like to know how `textureLodOffset` can be used with integer offsets as the documentation specifies, and what makes its use different from `texelFetchOffset`.
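
For reference, the manual version I mean looks roughly like this (`tex` and `coord` are placeholder names from my shader):

```glsl
#version 330 core

uniform sampler2D tex;  // placeholder sampler name
in vec2 coord;          // normalized coordinates in [0, 1]
out vec4 fragColor;

void main()
{
    // One texel step in normalized units: divide by the texture size.
    vec2 texelStep = 1.0 / vec2(textureSize(tex, 0));

    // Sample one texel to the right at an explicit lod of 0.
    fragColor = textureLod(tex, coord + vec2(texelStep.x, 0.0), 0.0);
}
```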
The difference between `textureLodOffset` and `texelFetchOffset` is the texture coordinates. The texture coordinates of `textureLodOffset` are normalized to the range [0, 1], whereas the texture coordinates of `texelFetchOffset` are given in texels, so their range depends on the size of the texture. In contrast to the *Fetch* functions, `textureLodOffset` respects texture filtering and sampler state (such as wrap modes).
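
As an illustration, assuming a `sampler2D` uniform named `tex`, a `textureLodOffset` lookup with an offset of one texel should match the manual computation with `textureLod` (a minimal sketch, not accounting for behavior at the texture border):

```glsl
#version 330 core

uniform sampler2D tex;  // assumed sampler name
in vec2 uv;             // normalized coordinates in [0, 1]
out vec4 fragColor;

void main()
{
    // Offset of one texel to the right, given in texel units.
    vec4 a = textureLodOffset(tex, uv, 0.0, ivec2(1, 0));

    // Manual equivalent: convert the texel offset to normalized units.
    vec4 b = textureLod(tex, uv + vec2(1.0, 0.0) / vec2(textureSize(tex, 0)), 0.0);

    fragColor = a - b;  // expected to be (near) zero
}
```

Note that the offset parameter of the *Offset* functions must be a constant expression and must lie within the implementation-defined range [`gl_MinProgramTexelOffset`, `gl_MaxProgramTexelOffset`], a restriction the manual computation does not have.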
The *Fetch* functions perform a lookup of a single texel at an unambiguous integer texture coordinate. See OpenGL 4.6 API Compatibility Profile Specification - 11.1.3.2 Texel Fetches.
In both cases, the offset argument is integral because it is specified in texels: it is added to the texel coordinate (for `textureLodOffset`, after the normalized coordinate has been converted to texel space), so it always addresses neighboring texels regardless of the texture size. See further OpenGL Shading Language 4.60 Specification - 8.9.2. Texel Lookup Functions.
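
For example, with an assumed `sampler2D` named `tex`, a `texelFetchOffset` call is just a `texelFetch` whose integer texel coordinate is shifted by the offset (a minimal sketch):

```glsl
#version 330 core

uniform sampler2D tex;  // assumed sampler name
out vec4 fragColor;

void main()
{
    // Integer texel coordinate, here derived from the fragment position.
    ivec2 texel = ivec2(gl_FragCoord.xy);

    // Fetch the texel one step to the right at lod 0;
    // no filtering, wrapping, or normalization is applied.
    vec4 a = texelFetchOffset(tex, texel, 0, ivec2(1, 0));

    // Equivalent manual form:
    vec4 b = texelFetch(tex, texel + ivec2(1, 0), 0);

    fragColor = a - b;  // should be exactly zero (out-of-bounds fetches are undefined)
}
```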