With Pygame and OpenGL I plot a red and a green sphere in 3D, one close to and one far from the camera in the z-direction:
I would like to retrieve the RGB values of this image in the 2D plane of my view. I got stuck with my Pygame and OpenGL approaches, marked as 1 and 2 in the following code snippet, as both return only black pixels.
import pygame
from pygame.locals import DOUBLEBUF, OPENGL
import OpenGL.GL as GL
import OpenGL.GLU as GLU
x1, y1, z1 = 0, 0, -5
r1 = .2
x2, y2, z2 = 0, 1, -9
r2 = .2
border_x, border_y, border_z = 200, 200, 200
field_of_view_in_deg = 45
aspect_ratio = border_x/border_y
clip_near = .1
clip_far = 200
longitude_parts = 50
latitude_parts = 50
delay = 500
red = (1, 0, 0)
green = (0, 1, 0)
pygame.init()
win = pygame.display.set_mode((border_x, border_y), DOUBLEBUF | OPENGL)
surface = pygame.Surface((border_x, border_y))
GLU.gluPerspective(field_of_view_in_deg, aspect_ratio, clip_near, clip_far)
GL.glEnable(GL.GL_DEPTH_TEST)
GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT)
GL.glPushMatrix()
GL.glTranslatef(x1, y1, z1)
GL.glColor3fv(red)
GLU.gluSphere(GLU.gluNewQuadric(), r1, longitude_parts, latitude_parts)
GL.glPopMatrix()
GL.glPushMatrix()
GL.glTranslatef(x2, y2, z2)
GL.glColor3fv(green)
GLU.gluSphere(GLU.gluNewQuadric(), r2, longitude_parts, latitude_parts)
GL.glPopMatrix()
# 1: read pixels via a Pygame PixelArray on a separate Surface
pixels = pygame.PixelArray(surface)
# 2: read pixels back from the OpenGL framebuffer
pixels = GL.glReadPixels(border_x, border_y, border_x, border_y, GL.GL_RGB, GL.GL_INT)
pygame.display.flip()
pygame.time.wait(delay)
My questions are:
How can I project from 3D onto the defined 2D surface? I tried to insert gluUnProject before glReadPixels, but I do not entirely understand what this function returns; here it gives obj_x=0.0414213539903457, obj_y=0.0414213539903457, obj_z=-0.09999999851062896:
modelview_matrix = GL.glGetDoublev(GL.GL_MODELVIEW_MATRIX)
projection_matrix = GL.glGetDoublev(GL.GL_PROJECTION_MATRIX)
viewport = GL.glGetIntegerv(GL.GL_VIEWPORT)
obj_x, obj_y, obj_z = GLU.gluUnProject(border_x, border_y, 0,
modelview_matrix,
projection_matrix,
viewport)
Concerning 2: How can I tell glReadPixels to read from the defined surface?
The parameters of glReadPixels specify the window rectangle that is read. The first two parameters are the window coordinates of the lower left corner of that rectangle (the first pixel read from the framebuffer), and the third and fourth parameters are its dimensions (width and height).
The parameters that read the entire window (framebuffer) are therefore 0, 0, border_x, border_y:
pixels = GL.glReadPixels(0, 0, border_x, border_y, GL.GL_RGB, GL.GL_INT)
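Since the first two arguments are only the lower left corner of the rectangle, the call can also be restricted to a single pixel. A minimal sketch of this, assuming the scene has already been rendered (the window coordinates 100, 100 and the variable name center are just illustrative):
# read a 1x1 rectangle at window coordinates (100, 100)
center = GL.glReadPixels(100, 100, 1, 1, GL.GL_RGB, GL.GL_FLOAT)
print(center[0][0])  # RGB triple of that single pixel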
Note that you can read the color values either as integer values (GL.GL_BYTE, GL.GL_INT) or as floating point values in the range [0.0, 1.0] (GL.GL_FLOAT).
For example, the following code prints 1.0, because the pixel in the middle of the view lies on the red sphere, so its red color channel is read:
pixels = GL.glReadPixels(0, 0, border_x, border_y, GL.GL_RGB, GL.GL_FLOAT)
x, y = 100, 100
r = pixels[x][y][0]
print(r)
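If the goal is to get the rendered image back into a Pygame Surface, so that something like pygame.PixelArray from attempt 1 works on the actual framebuffer content, one option is to read the pixels as unsigned bytes and build a Surface from them. This is only a sketch of that idea; the names screen_copy and pixel_array are illustrative, and it assumes the default framebuffer has just been rendered to:
GL.glPixelStorei(GL.GL_PACK_ALIGNMENT, 1)  # no row padding, even if the width is not divisible by 4
data = GL.glReadPixels(0, 0, border_x, border_y, GL.GL_RGB, GL.GL_UNSIGNED_BYTE)
# glReadPixels starts at the bottom left corner, while Pygame's origin is the top left,
# so the image is flipped vertically when the Surface is built
screen_copy = pygame.image.fromstring(bytes(data), (border_x, border_y), "RGB", True)
pixel_array = pygame.PixelArray(screen_copy)
In newer Pygame versions, pygame.image.frombytes can be used instead of pygame.image.fromstring.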