I was following this tutorial http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/ to understand how viewing works, but when I tried to apply it in my iOS app I ran into a lot of trouble.
Basically, what I understood is the following.
From a basic iOS tutorial I found this calculation of the projection matrix:
float aspect = fabs(self.view.bounds.size.width / self.view.bounds.size.height);
GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(65.0f), aspect, 0.1f, 100.0f);
_modelViewProjectionMatrix = projectionMatrix;
which I really didn't understand... how did they come up with 65, for example?
Another tutorial did this:
glViewport(0, 0, self.view.bounds.size.width,self.view.bounds.size.height);
Implementation: my current app only displays a blue screen (basically the color of my cube), which I'm assuming is because the camera is currently at the origin.
I have the following data set
static const GLfloat cubeVertices[] = {
-1.0f,-1.0f,-1.0f, // triangle 1 : begin
-1.0f,-1.0f, 1.0f,
-1.0f, 1.0f, 1.0f, // triangle 1 : end
1.0f, 1.0f,-1.0f, // triangle 2 : begin
-1.0f,-1.0f,-1.0f,
-1.0f, 1.0f,-1.0f, // triangle 2 : end
1.0f,-1.0f, 1.0f,
-1.0f,-1.0f,-1.0f,
1.0f,-1.0f,-1.0f,
1.0f, 1.0f,-1.0f,
1.0f,-1.0f,-1.0f,
-1.0f,-1.0f,-1.0f,
-1.0f,-1.0f,-1.0f,
-1.0f, 1.0f, 1.0f,
-1.0f, 1.0f,-1.0f,
1.0f,-1.0f, 1.0f,
-1.0f,-1.0f, 1.0f,
-1.0f,-1.0f,-1.0f,
-1.0f, 1.0f, 1.0f,
-1.0f,-1.0f, 1.0f,
1.0f,-1.0f, 1.0f,
1.0f, 1.0f, 1.0f,
1.0f,-1.0f,-1.0f,
1.0f, 1.0f,-1.0f,
1.0f,-1.0f,-1.0f,
1.0f, 1.0f, 1.0f,
1.0f,-1.0f, 1.0f,
1.0f, 1.0f, 1.0f,
1.0f, 1.0f,-1.0f,
-1.0f, 1.0f,-1.0f,
1.0f, 1.0f, 1.0f,
-1.0f, 1.0f,-1.0f,
-1.0f, 1.0f, 1.0f,
1.0f, 1.0f, 1.0f,
-1.0f, 1.0f, 1.0f,
1.0f,-1.0f, 1.0f
};
This is my setup, very basic, from an iOS tutorial:
- (void)setupGL {
    [EAGLContext setCurrentContext:self.context];
    [self loadShaders];

    glGenBuffers(1, &_vertexBuffer);
    glBindBuffer(GL_ARRAY_BUFFER, _vertexBuffer);
    glBufferData(GL_ARRAY_BUFFER, sizeof(cubeVertices), cubeVertices, GL_STATIC_DRAW);

    glVertexAttribPointer(GLKVertexAttribPosition,
                          3,
                          GL_FLOAT, GL_FALSE,
                          0,
                          BUFFER_OFFSET(0));
    glEnableVertexAttribArray(GLKVertexAttribPosition);
    //glBindVertexArrayOES(0);
}
and my drawInRect and update methods:
- (void)update {
    //glViewport(0, 0, self.view.bounds.size.width, self.view.bounds.size.height);
    float aspect = fabs(self.view.bounds.size.width / self.view.bounds.size.height);
    GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(65.0f), aspect, 0.1f, 100.0f);
    _modelViewProjectionMatrix = projectionMatrix;
}
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect {
    glClearColor(0.65f, 0.65f, 0.65f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    glUseProgram(_program);
    glUniformMatrix4fv(uniforms[UNIFORM_MODELVIEWPROJECTION_MATRIX], 1, 0, _modelViewProjectionMatrix.m);
    glDrawArrays(GL_TRIANGLES, 0, 12*3);
}
and my vertex shader
attribute vec4 position;
uniform mat4 modelViewProjectionMatrix;
void main() {
gl_Position = modelViewProjectionMatrix * position;
}
and my fragment shader
void main() {
gl_FragColor = vec4 (0.165, 0.427, 0.620, 1.0);
}
To begin with the answer: what you are looking for and are missing is GLKMatrix4MakeLookAt. The rest is just in case you are interested in a bit deeper understanding.
Your assumption seems to be correct, but I think you have not fully understood how the matrix system and OpenGL really work, even though you do seem to know how to use it. Generally we are looking at 3 matrix components, which may either be multiplied into a single product before being sent to the shader, or passed individually and multiplied in the shader.
The first component is the projection matrix. This component reflects the on-screen projection and is usually set up as "ortho" or "frustum". "Ortho" is an orthographic projection, which means an object will appear the same size no matter its distance. "Frustum" will make objects appear larger or smaller depending on their distance. In your case you are using "frustum" through the convenience function GLKMatrix4MakePerspective. The first parameter describes the field of view (in your sample a 65-degree angle; there is nothing special about 65, it is simply a natural-looking choice, and anything roughly between 45 and 90 degrees is common), the second is the aspect ratio, which should reflect the screen/view ratio, and the last two are the near and far clipping planes. An equivalent using "frustum" would be:
GLfloat fieldOfView = M_PI_2;
GLfloat near = 0.1f;
GLfloat far = 1000.0f;
GLfloat screenRatio = 1.0f / 2.0f;              // height / width
GLfloat right = tanf(fieldOfView * .5f) * near; // tangent of half the field of view, projected onto the near plane
GLfloat left = -right;                          // symmetry
GLfloat top = right * screenRatio;              // scale by screen ratio
GLfloat bottom = -top;                          // symmetry

GLKMatrix4MakeFrustum(left, right, bottom, top, near, far);
The second is the view matrix, which generally acts as the "camera". The easiest way to use it is to call some form of "lookAt", which in your case is GLKMatrix4MakeLookAt. This should answer your question "what is the equivalent of this in iOS?".
And the third one is the model matrix, which describes the object's position in your coordinate system. It is usually used to put your model at a desired position, give it a specific rotation, and scale it if needed.
So how it all comes together is that at some point you multiply all the matrices together and call the result something like the model-view-projection matrix. Each vertex position is then multiplied by this matrix to compute its on-screen projection.
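It makes no difference whether you do that multiplication once on the CPU or apply the matrices one by one in the shader, because matrix multiplication is associative. A plain-C sketch with minimal helpers of my own (not a GLKit API), using a translation and a scale as stand-ins for the three matrices:

```c
typedef struct { float m[16]; } M4; /* column-major */

/* r = a * b, column-major 4x4 product */
M4 m4_mul(const M4 *a, const M4 *b) {
    M4 r;
    for (int c = 0; c < 4; ++c)
        for (int row = 0; row < 4; ++row) {
            float s = 0.0f;
            for (int k = 0; k < 4; ++k)
                s += a->m[k*4 + row] * b->m[c*4 + k];
            r.m[c*4 + row] = s;
        }
    return r;
}

/* out = a * v for a 4-component vector */
void m4_mul_vec(const M4 *a, const float v[4], float out[4]) {
    for (int row = 0; row < 4; ++row)
        out[row] = a->m[row]   * v[0] + a->m[4+row]  * v[1]
                 + a->m[8+row] * v[2] + a->m[12+row] * v[3];
}

M4 m4_translate(float x, float y, float z) {
    M4 r = {{ 1,0,0,0, 0,1,0,0, 0,0,1,0, x,y,z,1 }};
    return r;
}

M4 m4_scale(float s) {
    M4 r = {{ s,0,0,0, 0,s,0,0, 0,0,s,0, 0,0,0,1 }};
    return r;
}
```

So (projection * view * model) * vertex equals projection * (view * (model * vertex)); what does matter is the order of the matrices themselves, since translate-then-scale is not the same as scale-then-translate.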
glViewport plays no part in this at all. That function only defines what part of the buffer you are drawing to and nothing more. Try dividing all its values by half and see what happens (better than any other explanation).
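Once a vertex has been projected into the [-1, 1] range, what the viewport does is map that range onto window pixels. The standard viewport transform is simple enough to sketch (a helper of my own, not an OpenGL call):

```c
/* map a normalized device coordinate in [-1, 1] to a pixel coordinate,
   given the corresponding origin and size values passed to glViewport */
float ndc_to_window(float ndc, float origin, float size) {
    return origin + (ndc + 1.0f) * 0.5f * size;
}
```

With glViewport(0, 0, 320, 480), an X of -1 lands on pixel column 0 and an X of 1 on column 320; halving the sizes simply squeezes the same [-1, 1] box into a quarter of the screen.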
To explain a bit of the math behind the OpenGL implementation: OpenGL will only draw fragments (pixels) whose positions fall inside the box [-1, 1] on all axes. There is no magic in the projections that overrides this; instead, the vertex positions are transformed so that the correct values land inside that box.
A frustum, for instance, takes 4 border values (left, right, top, bottom) plus the near and far clipping planes. The matrix is defined so that any vertex at a depth equal to near is transformed to -1 on the Z axis, and any vertex at a depth equal to far is transformed to 1. Depths in between do not map linearly, though: because of the perspective divide the result is linear in 1/Z, so most of the [-1, 1] range is used up very close to the near plane. The X and Y values are divided by the vertex's depth, which is exactly what makes distant objects appear smaller; at the near plane the border values left, right, top and bottom map precisely onto the edges of the [-1, 1] box.
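That the depth mapping is not linear is easy to verify numerically. A sketch using the standard perspective depth formula (eye-space Z is negative in front of the camera; the function name is mine, not an OpenGL API):

```c
/* eye-space depth (negative in front of the camera) to normalized device Z */
float depth_to_ndc(float z_eye, float near, float far) {
    float z_clip = -(far + near) / (far - near) * z_eye
                 - 2.0f * far * near / (far - near);
    float w_clip = -z_eye; /* the perspective divide */
    return z_clip / w_clip;
}
```

With the sample planes near = 0.1 and far = 100, a vertex exactly at the near plane maps to -1 and one at the far plane to 1, but the geometric midpoint at a depth of 50.05 maps to roughly 0.998, nowhere near 0.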
The lookAt matrix is actually very similar to a model matrix, but inverted. If you move the camera backwards, it is the same as moving the object forward; if you rotate the camera left, the object appears to move and rotate to the right, and so on...
The model matrix simply transforms all the vertex positions using its basis vectors and a translation. The relevant parts of this matrix are the top 3x3 block, which holds the basis vectors, and the bottom (or right, in some implementations) 3x1 (1x3) vector, which is the translation. The easiest way to imagine it is as a coordinate system defined inside another coordinate system: the zero value (the origin) sits at the translation part of the matrix, the X axis is the first row (column) of the 3x3 block, Y the second and Z the third. The lengths of these 3 vectors represent the scale along the respective axes... It all fits together.
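As a last sketch, here is a model matrix built exactly from those pieces, three basis vectors whose lengths carry the scale plus a translation (again minimal helpers of my own, assuming column-major layout):

```c
typedef struct { float m[16]; } Mat4; /* column-major */

/* build a model matrix from three basis vectors and a translation;
   the length of each basis vector is the scale along that axis */
Mat4 model_matrix(const float xAxis[3], const float yAxis[3],
                  const float zAxis[3], const float origin[3]) {
    Mat4 r = {{
        xAxis[0],  xAxis[1],  xAxis[2],  0.0f,
        yAxis[0],  yAxis[1],  yAxis[2],  0.0f,
        zAxis[0],  zAxis[1],  zAxis[2],  0.0f,
        origin[0], origin[1], origin[2], 1.0f
    }};
    return r;
}

/* transform a point (x, y, z, 1) by the matrix */
void apply_model(const Mat4 *m, const float p[3], float out[3]) {
    for (int i = 0; i < 3; ++i)
        out[i] = m->m[i]   * p[0] + m->m[4+i] * p[1]
               + m->m[8+i] * p[2] + m->m[12+i];
}
```

Axes of length 2 scale the model by 2, and the translation column places its origin: the model-space point (1, 1, 1) lands at (3, 4, 5) in world space for axes (2,0,0), (0,2,0), (0,0,2) and translation (1, 2, 3).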