
What program is responsible for projection in computer graphics?


By projection I mean taking 3D model data, camera data, etc., and projecting the scene into a flat 2D image that can be displayed.

Does DirectX / OpenGL do this? Do they actually do it for the game/graphics designer, or do you have to implement the 3D-to-2D projection yourself with OpenGL / DirectX? This is where I am confused.


Solution

  • Well, this depends on what kind of computer graphics we're talking about. For example, the images you usually see in visual effects (think ILM, Digital Domain, Weta Digital) and computer animation movies (think Pixar, DreamWorks Animation, Disney Animation, Blue Sky) are generated offline by dedicated renderer software. Such offline renderers can be implemented in a number of ways: they may be rasterizers (not unlike OpenGL or DirectX 3D) or raytracers (completely different), and there are even hybrids.

    OpenGL and DirectX 3D are APIs meant for realtime image generation by means of rasterization, and those are what I'm going to explain. The first thing you must understand is that from the point of view of OpenGL and DirectX 3D there are no 3D models, no camera, no scene. For them there is only a framebuffer (a 2D canvas) to which they rasterize (= draw) merely points, lines, or triangles. When rasterizing such primitives, all sorts of input data can influence the process, for example images (textures) or parameters that control a simulated illumination process.
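    This framebuffer-and-primitives view can be sketched in a few lines. The following is only an illustrative toy (plain Python; all names are mine, and this is nothing like how OpenGL or DirectX are implemented internally), showing that a framebuffer is just a 2D grid of values and that rasterizing a point means writing into the cell it covers:

    ```python
    def make_framebuffer(w, h, clear_color="."):
        # A framebuffer is just a 2D grid of "color" values.
        return [[clear_color] * w for _ in range(h)]

    def rasterize_point(fb, x, y, color="#"):
        # Clip: silently ignore fragments that fall outside the canvas.
        if 0 <= x < len(fb[0]) and 0 <= y < len(fb):
            fb[y][x] = color

    fb = make_framebuffer(8, 4)
    rasterize_point(fb, 3, 1)
    rasterize_point(fb, 99, 99)  # off-canvas: clipped, nothing happens
    for row in fb:
        print("".join(row))
    ```

    Note that nothing here knows about 3D, cameras, or scenes; from the API's perspective, everything upstream of this drawing step is the application's (or the shaders') job.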

    With current systems the process has four stages.

    1. Geometry data is turned into positions in so-called clip space. This is done by a stage called the vertex shader. Effectively, for each corner of a primitive it takes an input vector (which could be anything) and calculates a position in clip space from it. The projection you asked about happens in this very step – but there's much more to the whole thing. Please read on.

    2. (optional) Tessellation shading is applied to the primitives defined by the positions generated in the previous step. Tessellation can be thought of as a kind of program-controlled refinement to make things appear smoother or with higher detail.

    3. (optional) Geometry shading can turn the primitives generated so far into new primitives. This is useful, for example, to generate strands of simulated hair.

    4. Fragment shading calculates the final color for each fragment covered by the rasterized primitive. There's at least one fragment per pixel, but with antialiasing there may be many more.
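    The projection of step 1 is ordinary matrix math. Here's a hedged sketch (plain Python, helper names are my own) of a standard OpenGL-style perspective projection: multiply the position by a projection matrix to get clip space, do the perspective divide to get normalized device coordinates, then map those to window pixels:

    ```python
    import math

    def perspective(fovy_deg, aspect, near, far):
        # Standard OpenGL-style perspective projection matrix,
        # written here in row-major order.
        f = 1.0 / math.tan(math.radians(fovy_deg) / 2.0)
        return [
            [f / aspect, 0.0, 0.0, 0.0],
            [0.0, f, 0.0, 0.0],
            [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
            [0.0, 0.0, -1.0, 0.0],
        ]

    def mat_vec(m, v):
        return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

    def project(point3, proj, width, height):
        # Vertex-shader stage: position -> clip space (homogeneous 4-vector).
        clip = mat_vec(proj, point3 + [1.0])
        # Perspective divide: clip space -> normalized device coordinates (-1..1).
        ndc = [clip[i] / clip[3] for i in range(3)]
        # Viewport transform: NDC -> window pixel coordinates.
        x = (ndc[0] * 0.5 + 0.5) * width
        y = (ndc[1] * 0.5 + 0.5) * height
        return x, y

    proj = perspective(90.0, 1.0, 0.1, 100.0)
    # A point straight ahead of the camera lands in the screen center:
    print(project([0.0, 0.0, -5.0], proj, 640, 480))  # → (320.0, 240.0)
    ```

    In a real pipeline only the first of these three substeps (clip-space output) is written by the programmer in the vertex shader; the perspective divide and viewport transform are done by fixed hardware afterwards.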

    Now where do OpenGL and DirectX 3D enter the picture here? Well, before OpenGL 2 and DirectX 8 the steps described above were hardcoded into them, so for all practical purposes they did it. Today, modern OpenGL and DirectX 3D are more like a versatile graphics toolkit, which a skilled programmer can use to implement their own graphics generation pipelines. Steps 1 to 4 are done by so-called shader programs, which must be written by the programmer who uses OpenGL or DirectX 3D to make the whole thing work.
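    The programmable model can be sketched like this: the pipeline itself is fixed, but it invokes a per-vertex function that the programmer supplies. This is a hedged illustration in plain Python (the structure and names are mine, not an actual API; real vertex shaders are written in GLSL or HLSL and run on the GPU):

    ```python
    def run_vertex_stage(vertex_shader, vertices, uniforms):
        # The fixed pipeline just invokes the user-supplied shader once per vertex.
        return [vertex_shader(v, uniforms) for v in vertices]

    # A trivial user-written "vertex shader": scale and translate into clip space.
    def my_vertex_shader(position, uniforms):
        s = uniforms["scale"]
        tx, ty = uniforms["offset"]
        x, y, z = position
        return (x * s + tx, y * s + ty, z, 1.0)  # clip-space (x, y, z, w)

    clip_positions = run_vertex_stage(
        my_vertex_shader,
        [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)],
        {"scale": 0.5, "offset": (0.25, 0.25)},
    )
    print(clip_positions)
    ```

    Swapping in a different `my_vertex_shader` (one that multiplies by a full model-view-projection matrix, for instance) changes the projection without touching the pipeline, which is exactly the flexibility the answer describes.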