
Precompute Texture for entire model, OpenGL


A little background. I currently have an application that does a whole lot of fancy calculations to compute textures on the fly to display on a building model. Think heatmaps, contour lines, etc. I do all of this in place and all is fine and dandy. Now I'm trying to get things to be as performant as possible because I need to be able to demo the application on a low-end machine. The way I'm trying to do this is by doing all of the texture generation for the whole model ahead of time and just rendering the scene statically. This should theoretically make the whole thing run incredibly smoothly.

My question, then: is there a way to design the vertex shader so that, instead of producing only the visible fragments for the fragment shader, it places every fragment according to its texture coordinate? Assume I have a fully UV-mapped model. That way I can render a texture that can be mapped directly onto the model at runtime. Is this possible?

Edit: I was asked to add detail to this, so I'll try my best. What I want is to "prebake" a texture such that at runtime all I have to do is a texture lookup for each fragment, rather than an expensive texture-generation sequence. In other words, I'm asking whether it is possible to design a vertex shader which, instead of passing fragments to the fragment shader normally, passes them based on the texture coordinate of the model. That way, my fragment shader would receive every point on the model rather than only those that are visible.
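For illustration, the runtime shading would then reduce to a single lookup into the prebaked texture. A minimal GLSL sketch of such a runtime fragment shader (the sampler and varying names here are placeholders, not from the original post):

    #version 330 core

    in vec2 vUV;                     // interpolated UV from the vertex shader
    out vec4 fragColor;

    uniform sampler2D bakedTexture;  // the precomputed texture (hypothetical name)

    void main() {
        // All of the expensive generation happened ahead of time; just sample.
        fragColor = texture(bakedTexture, vUV);
    }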


Solution

  • Create a UV unwrapping of the model, as you would for a static texture. Then, to generate the texture, create a vertex shader that sets gl_Position from the UV coordinate (you'll have to map the [0, 1]² UV range to the [-1, 1]² XY range of clip space), and pass all the other uniforms as usual. With that, you can bake the texture for the model with your existing fragment shader by rendering to texture.
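A minimal GLSL sketch of such a bake vertex shader, assuming the unwrapped UVs arrive as a vertex attribute (the attribute and varying names are placeholders; any other inputs your fragment shader expects would be forwarded the same way):

    #version 330 core

    layout(location = 0) in vec3 aPosition;  // model position, still forwarded
    layout(location = 1) in vec2 aUV;        // the model's unwrapped UV coordinate

    out vec3 vWorldPos;  // forwarded so the existing fragment shader keeps its inputs

    void main() {
        vWorldPos = aPosition;
        // Place the vertex at its UV location instead of its projected position:
        // map UV in [0,1]^2 to clip-space XY in [-1,1]^2.
        gl_Position = vec4(aUV * 2.0 - 1.0, 0.0, 1.0);
    }

Render the model once with this shader into a framebuffer-attached texture; the rasterizer then visits every texel covered by the unwrapping, so the fragment shader evaluates the expensive computation for the entire surface rather than just the visible parts. At runtime, bind the baked texture and sample it with the same UVs.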