I am converting some code from HLSL and XNA to Cg and OpenGL.
The code is for rendering volume data. However, the volume is not sampled at the same spacing in each dimension, for example (0.9f, 1f, 1f), so a scale factor needs to be applied.
In the XNA and HLSL example, they do the following:
mul(input.Position * ScaleFactor, WorldViewProj);
Where WorldViewProj is passed into the shader.
In OpenGL, I was under the impression that glstate.matrix.mvp was the ModelViewProjection matrix, where ModelView is World * View. Clearly I am wrong, because nothing is drawn when I do the following:
output.Position = mul( input.Position * scale, glstate.matrix.mvp);
The volume is being rendered with glMatrixMode set to GL_MODELVIEW. Will I have to create my own matrices? If so, are there any good tutorials? :D
glMatrixMode is kind of like a "with" statement for the other matrix manipulation functions. So
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(...);
is to be interpreted as
modelview_matrix.load_identity();
modelview_matrix.translate(...);
and so on.
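Concretely, a fixed-function setup that makes glstate.matrix.mvp meaningful inside a Cg shader might look like the following (a minimal sketch; the gluPerspective and gluLookAt values are placeholders):

glMatrixMode(GL_PROJECTION);                 /* select the projection matrix */
glLoadIdentity();
gluPerspective(60.0, 4.0 / 3.0, 0.1, 100.0); /* placeholder frustum */

glMatrixMode(GL_MODELVIEW);                  /* select the modelview matrix */
glLoadIdentity();
gluLookAt(0.0, 0.0, 5.0,                     /* eye (placeholder) */
          0.0, 0.0, 0.0,                     /* center */
          0.0, 1.0, 0.0);                    /* up */

/* At draw time, glstate.matrix.mvp in the shader evaluates to
   GL_PROJECTION * GL_MODELVIEW. */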
Furthermore, see the answer by @Tobias Schlegel. Shaders get their "constant" input in the form of so-called uniforms. Older versions of OpenGL pass on the fixed-function state, like the modelview matrix. Newer OpenGL versions (OpenGL-3 core and later) deprecated all the built-in matrix manipulation functionality. Instead, the user is expected to keep track of the transformation pipeline and to supply all required matrices through self-defined uniforms. This also allows you to emulate the DirectX behaviour.
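For the Cg runtime specifically, a minimal sketch of the uniform approach could look like this (the vertexProgram handle and the parameter name worldViewProj are illustrative, not from the original code):

/* Shader side (Cg), replacing the glstate access:
       uniform float4x4 worldViewProj;
       output.Position = mul(worldViewProj, input.Position * scale);
   Note the argument order: tracked as below, the matrix follows OpenGL's
   column-vector convention, i.e. mul(matrix, vector). To keep the
   HLSL-style mul(vector, matrix), track with CG_GL_MATRIX_TRANSPOSE
   instead. This also assumes scale leaves the w component at 1. */

/* C side: copy the current GL_PROJECTION * GL_MODELVIEW state into the
   parameter; call this after setting the matrices, before drawing. */
CGparameter mvp = cgGetNamedParameter(vertexProgram, "worldViewProj");
cgGLSetStateMatrixParameter(mvp,
                            CG_GL_MODELVIEW_PROJECTION_MATRIX,
                            CG_GL_MATRIX_IDENTITY);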