The following is a little demo of a problem I also have in my actual project.
My display method looks like this:
@Override
public void display(GLAutoDrawable drawable) {
    gl2.glClear(GL2.GL_COLOR_BUFFER_BIT | GL2.GL_DEPTH_BUFFER_BIT);
    gl2.glEnable(GL2.GL_DEPTH_TEST);
    gl2.glEnable(GL2.GL_LIGHTING);

    gl2.glMatrixMode(GL2.GL_MODELVIEW);
    gl2.glLoadIdentity();
    gl2.glMatrixMode(GL2.GL_PROJECTION);
    gl2.glLoadIdentity();

    gl2.glMatrixMode(GL2.GL_MODELVIEW);
    glu.gluLookAt(10, 10, 11, 10, 10, 9, 0, 1, 0);
    gl2.glMatrixMode(GL2.GL_PROJECTION);
    glu.gluPerspective(90, 1, 0.1, 1000.1);

    gl2.glMatrixMode(GL2.GL_MODELVIEW);
    gl2.glRotated(angle++, 0, 1, 0);

    light();

    wall1();
    wall2();
    wall3();

    gl2.glFlush();
}
The light is placed in the middle of the scene:
private void light() {
    float[] position = { 10, 10, 20, 1 };
    gl2.glLightfv(GL2.GL_LIGHT0, GL2.GL_POSITION, position, 0);
    gl2.glEnable(GL2.GL_LIGHT0);
}
The scene consists of three walls:
private void wall1() {
    float[] colors = { 0, 1, 0, 1 };
    gl2.glMaterialfv(GL2.GL_FRONT_AND_BACK, GL2.GL_AMBIENT_AND_DIFFUSE, colors, 0);
    gl2.glBegin(GL2.GL_QUADS);
    for (int i = 0; i < 20; i++) {
        for (int j = 0; j < 20; j++) {
            gl2.glNormal3d(0, 0, 1);
            gl2.glVertex3d(i, j, 0);
            gl2.glNormal3d(0, 0, 1);
            gl2.glVertex3d(i + 1, j, 0);
            gl2.glNormal3d(0, 0, 1);
            gl2.glVertex3d(i + 1, j + 1, 0);
            gl2.glNormal3d(0, 0, 1);
            gl2.glVertex3d(i, j + 1, 0);
        }
    }
    gl2.glEnd();
}
private void wall2() {
    float[] colors = { 0, 1, 0, 1 };
    gl2.glMaterialfv(GL2.GL_FRONT_AND_BACK, GL2.GL_AMBIENT_AND_DIFFUSE, colors, 0);
    gl2.glBegin(GL2.GL_QUADS);
    for (int i = 0; i < 20; i++) {
        for (int j = 0; j < 40; j++) {
            gl2.glNormal3d(-1, 0, 0);
            gl2.glVertex3d(20, i, j);
            gl2.glNormal3d(-1, 0, 0);
            gl2.glVertex3d(20, i + 1, j);
            gl2.glNormal3d(-1, 0, 0);
            gl2.glVertex3d(20, i + 1, j + 1);
            gl2.glNormal3d(-1, 0, 0);
            gl2.glVertex3d(20, i, j + 1);
        }
    }
    gl2.glEnd();
}
private void wall3() {
    float[] colors = { 0, 1, 0, 1 };
    gl2.glMaterialfv(GL2.GL_FRONT_AND_BACK, GL2.GL_AMBIENT_AND_DIFFUSE, colors, 0);
    gl2.glBegin(GL2.GL_QUADS);
    for (int i = 0; i < 20; i++) {
        for (int j = 0; j < 20; j++) {
            gl2.glNormal3d(0, 0, -1);
            gl2.glVertex3d(20 - i, j, 40);
            gl2.glNormal3d(0, 0, -1);
            gl2.glVertex3d(20 - i - 1, j, 40);
            gl2.glNormal3d(0, 0, -1);
            gl2.glVertex3d(20 - i - 1, j + 1, 40);
            gl2.glNormal3d(0, 0, -1);
            gl2.glVertex3d(20 - i, j + 1, 40);
        }
    }
    gl2.glEnd();
}
Finally, the result looks like this:
You can see that the big wall (wall2) in the middle is lit up as expected, while the two small walls on the left (wall1) and on the right (wall3) seem to get maximum light, which does not seem very realistic to me.
How do I fix this problem?
Thanks in advance!
The deprecated fixed-function lighting model in OpenGL is very rudimentary. It uses the angle between the surface normal and the light direction (and, if attenuation is configured, the distance) to determine how much light a surface receives from your light source. In this case the far ends of the bigger wall receive less light than the smaller walls, because the light hits them at a much shallower angle.
Another problem is that OpenGL calculates the lighting value per vertex and then interpolates that value across each face, so you get different results depending on the size and layout of your faces and their vertices.
Yes, this looks unrealistic, and that is one of the many reasons not to use outdated and deprecated features. If you are stuck with the deprecated OpenGL pipeline, you are stuck with this kind of lighting.
You can play around with GL_CONSTANT_ATTENUATION, GL_LINEAR_ATTENUATION and GL_QUADRATIC_ATTENUATION to get the best result for your particular scene, or you can try using spot lights instead of positional lights, but you will likely never get a perfect solution. See the docs for glLight.
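For illustration, a minimal sketch of how those attenuation factors could be added to the light() method from the question; the concrete values here are only assumptions and would need tuning for your scene:

private void light() {
    float[] position = { 10, 10, 20, 1 };
    gl2.glLightfv(GL2.GL_LIGHT0, GL2.GL_POSITION, position, 0);

    // Attenuation = 1 / (constant + linear * d + quadratic * d^2),
    // where d is the distance from the light to the vertex.
    // These values are guesses purely for illustration.
    gl2.glLightf(GL2.GL_LIGHT0, GL2.GL_CONSTANT_ATTENUATION, 0.5f);
    gl2.glLightf(GL2.GL_LIGHT0, GL2.GL_LINEAR_ATTENUATION, 0.05f);
    gl2.glLightf(GL2.GL_LIGHT0, GL2.GL_QUADRATIC_ATTENUATION, 0.001f);

    gl2.glEnable(GL2.GL_LIGHT0);
}

Larger linear/quadratic factors dim geometry that is farther from the light, which can tone down the small walls but will never make the result physically correct.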
You could also try changing the size and layout of your faces/vertices to nudge the interpolated result closer to what you want, for example as in the sketch below.
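A rough sketch of that idea: subdivide the big wall into smaller quads so the per-vertex lighting is sampled at more points before being interpolated. The method name wall2Finer and the step size of 0.5 are placeholders for illustration only:

private void wall2Finer() {
    float[] colors = { 0, 1, 0, 1 };
    gl2.glMaterialfv(GL2.GL_FRONT_AND_BACK, GL2.GL_AMBIENT_AND_DIFFUSE, colors, 0);
    gl2.glBegin(GL2.GL_QUADS);
    double step = 0.5; // smaller quads -> more vertices -> smoother interpolated lighting
    for (double i = 0; i < 20; i += step) {
        for (double j = 0; j < 40; j += step) {
            gl2.glNormal3d(-1, 0, 0);
            gl2.glVertex3d(20, i, j);
            gl2.glNormal3d(-1, 0, 0);
            gl2.glVertex3d(20, i + step, j);
            gl2.glNormal3d(-1, 0, 0);
            gl2.glVertex3d(20, i + step, j + step);
            gl2.glNormal3d(-1, 0, 0);
            gl2.glVertex3d(20, i, j + step);
        }
    }
    gl2.glEnd();
}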
However, a proper solution would be to implement your own per-pixel lighting with shaders, or to use a third-party library that provides proper lighting.
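As a starting point, a per-pixel diffuse lighting shader pair could look roughly like this (GLSL 1.20, kept as Java strings; this assumes a compatibility profile so the gl_LightSource/gl_FrontMaterial built-ins are still available, and the compile/link steps via glCreateShader, glShaderSource, glCompileShader and glCreateProgram are omitted). It is only a sketch of the idea, not a drop-in replacement for the code above:

// Vertex shader: pass the eye-space position and normal to the fragment shader.
private static final String VERTEX_SHADER =
      "#version 120\n"
    + "varying vec3 eyePosition;\n"
    + "varying vec3 eyeNormal;\n"
    + "void main() {\n"
    + "    eyePosition = vec3(gl_ModelViewMatrix * gl_Vertex);\n"
    + "    eyeNormal = gl_NormalMatrix * gl_Normal;\n"
    + "    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;\n"
    + "}\n";

// Fragment shader: evaluate the diffuse term per pixel instead of per vertex.
private static final String FRAGMENT_SHADER =
      "#version 120\n"
    + "varying vec3 eyePosition;\n"
    + "varying vec3 eyeNormal;\n"
    + "void main() {\n"
    + "    vec3 n = normalize(eyeNormal);\n"
    + "    vec3 l = normalize(vec3(gl_LightSource[0].position) - eyePosition);\n"
    + "    float diffuse = max(dot(n, l), 0.0);\n"
    + "    gl_FragColor = gl_FrontMaterial.diffuse * gl_LightSource[0].diffuse * diffuse;\n"
    + "}\n";

The key point is that the normal, the light direction and the diffuse factor are evaluated for every fragment, so the result no longer depends on how finely the walls are tessellated.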