
What tasks in a 3D program does the CPU / GPU handle?


When rotating a scene in a 3D modeling interface, which part of the task is the CPU responsible for, and which part does the GPU take on? (moving mesh vertices, shading, keeping track of UV coords - perhaps offsetting them, lighting the triangles and rendering transparency correctly)

What rendering mode is normally used by such modeling programs (realtime) - immediate or retained?


Solution

  • First and foremost, the GPU is responsible for putting points, lines and triangles on the screen. However, this involves a certain amount of calculation.

    The usual pipeline is that for each vertex (a combination of attributes that usually includes, but is not limited to, position, normals, texture coordinates and so on), the vertex position is transformed from model local space into normalized device coordinates. In most implementations this is a 3-stage process.

    1. transformation from model local space into view (eye) space – eye space coordinates are later reused for things like illumination calculations
    2. transformation from view space to clip space, also called projection; this determines which part of the view space will later be visible in the viewport; it is also where the perspective distortion is introduced
    3. mapping into normalized device coordinates by coordinate homogenization (this last step is what actually creates the perspective effect if a perspective projection is used)

    The above calculations are normally carried out by the GPU.
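
    As a rough illustration of these three stages, here is a minimal CPU-side sketch using the GLM library. In a real application the GPU's vertex shader performs this chain; the matrices and values below are made-up examples for illustration only.

```cpp
// Minimal CPU-side sketch of the three stages above, using the GLM library.
// In a real application the GPU's vertex shader does this work; the matrices
// and values here are made-up examples for illustration only.
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <cstdio>

int main()
{
    glm::vec4 localPos(1.0f, 0.5f, -2.0f, 1.0f);           // vertex in model local space

    // Stage 1: model local space -> view (eye) space
    glm::mat4 model = glm::rotate(glm::mat4(1.0f), glm::radians(30.0f),
                                  glm::vec3(0.0f, 1.0f, 0.0f));
    glm::mat4 view  = glm::lookAt(glm::vec3(0.0f, 0.0f, 5.0f),  // eye position
                                  glm::vec3(0.0f, 0.0f, 0.0f),  // look-at target
                                  glm::vec3(0.0f, 1.0f, 0.0f)); // up vector
    glm::vec4 eyePos = view * model * localPos;             // reused e.g. for lighting

    // Stage 2: view space -> clip space (projection)
    glm::mat4 projection = glm::perspective(glm::radians(60.0f), 16.0f / 9.0f,
                                            0.1f, 100.0f);
    glm::vec4 clipPos = projection * eyePos;

    // Stage 3: homogeneous divide -> normalized device coordinates
    glm::vec3 ndc = glm::vec3(clipPos) / clipPos.w;

    std::printf("NDC: %f %f %f\n", ndc.x, ndc.y, ndc.z);
    return 0;
}
```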

    When rotating a scene in a 3D modeling interface, which part of the task is the CPU responsible for, and which part does the GPU take on?

    Well, that depends on what kind of rotation you mean. If you mean an alteration of the viewpoint, nothing in the scene input data actually changes; the only thing that gets altered is a parameter used in the first transformation step. This parameter is normally a 4×4 matrix. When rotating the viewport, a new modelview transformation matrix is calculated on the CPU, passed to the GPU, and the whole scene is redrawn.
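
    As a hedged sketch of that CPU side: only a 4×4 matrix is recomputed and handed to the GPU, while the scene's vertex data stays untouched. The uniform name "modelview", the program handle and the drawScene() helper below are hypothetical placeholders, and an OpenGL function loader (e.g. GLEW) is assumed.

```cpp
// Hedged sketch of the CPU side of rotating the viewport: only a 4x4 matrix
// is recomputed and handed to the GPU; the scene's vertex data is untouched.
// The uniform name "modelview", the program handle and drawScene() are
// hypothetical placeholders; an OpenGL loader (e.g. GLEW) is assumed.
#include <GL/glew.h>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

void onViewportRotate(GLuint program, float yawDeg, float pitchDeg)
{
    // CPU: rebuild the modelview matrix from the current camera angles
    glm::mat4 modelview(1.0f);
    modelview = glm::translate(modelview, glm::vec3(0.0f, 0.0f, -5.0f));
    modelview = glm::rotate(modelview, glm::radians(pitchDeg), glm::vec3(1.0f, 0.0f, 0.0f));
    modelview = glm::rotate(modelview, glm::radians(yawDeg),   glm::vec3(0.0f, 1.0f, 0.0f));

    // Pass the new matrix to the GPU ...
    glUseProgram(program);
    GLint loc = glGetUniformLocation(program, "modelview");
    glUniformMatrix4fv(loc, 1, GL_FALSE, glm::value_ptr(modelview));

    // ... and redraw the whole, unchanged scene with it
    // drawScene();  // hypothetical helper issuing the same geometry as before
}
```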

    If, however, a model is actually modified in a modeller, then the calculations are usually carried out on the CPU, as in the sketch below.
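
    A minimal sketch of such an edit, assuming the mesh already lives in a vertex buffer object: the CPU modifies its copy of the vertex positions and re-uploads only the changed range. The names "verts" and "vbo" are illustrative, and an OpenGL loader (e.g. GLEW) is assumed.

```cpp
// CPU-side mesh edit: move a range of vertices, then re-upload that range to
// the GPU buffer so the edited geometry is used on the next redraw.
#include <GL/glew.h>
#include <glm/glm.hpp>
#include <cstddef>
#include <vector>

void moveVertices(std::vector<glm::vec3>& verts, GLuint vbo,
                  const glm::vec3& offset, std::size_t first, std::size_t count)
{
    // CPU: apply the modelling operation to the application's copy of the mesh
    for (std::size_t i = first; i < first + count; ++i)
        verts[i] += offset;

    // Re-upload only the modified vertices so the GPU sees the edited geometry
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferSubData(GL_ARRAY_BUFFER,
                    first * sizeof(glm::vec3),
                    count * sizeof(glm::vec3),
                    &verts[first]);
}
```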

    (moving mesh vertices, shading, keeping track of UV coords - perhaps offsetting them, lighting the triangles and rendering transparency correctly)

    In an online (realtime) renderer this is normally done mostly by the GPU, but certain parts may be precalculated by the CPU.
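
    One typical example of such a CPU-side precalculation is sorting transparent objects back-to-front before drawing, so the GPU can blend them in the correct order. The Object struct and camera position in this sketch are made up for illustration.

```cpp
// CPU precalculation for an online renderer: sort transparent objects
// back-to-front so the GPU can blend them in the correct order.
// The Object struct and camera position are made-up for this sketch.
#include <glm/glm.hpp>
#include <algorithm>
#include <vector>

struct Object {
    glm::vec3 position;   // object centre in world space
    // ... mesh handle, material, etc.
};

void sortTransparentBackToFront(std::vector<Object>& transparent,
                                const glm::vec3& cameraPos)
{
    std::sort(transparent.begin(), transparent.end(),
              [&](const Object& a, const Object& b) {
                  // objects farther from the camera are drawn first
                  return glm::distance(a.position, cameraPos) >
                         glm::distance(b.position, cameraPos);
              });
    // The GPU then rasterizes, shades and blends them in this order.
}
```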

    It's impossible to make a definitive statement, because how the workload is shared depends on the actual application.