Is there a way to programmatically get the capabilities of my GPU at runtime using Pyglet?
I am making a game and would like to enable anti-aliasing. On my desktop (w/ nVidia Quadro), this works just fine:
import pyglet


class MyWindow(pyglet.window.Window):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.batch = pyglet.graphics.Batch()
        self.circle = pyglet.shapes.Circle(
            self.width / 2, self.height / 2, 100,
            batch=self.batch
        )

    def on_draw(self):
        self.clear()
        self.batch.draw()


if __name__ == '__main__':
    config = pyglet.gl.Config(sample_buffers=1, samples=8)
    game_window = MyWindow(width=480, height=360, config=config)
    pyglet.app.run()
and I get a nice anti-aliased edge on the circle. However, on my laptop (w/ integrated Intel graphics) I just get a white screen that never updates. Setting config = None makes everything visible again. Because I would like the game to be theoretically playable on any system, I want to selectively disable anti-aliasing for any GPU that doesn't support it. Something like:
import pyglet


class MyWindow(pyglet.window.Window):
    ...


def GPUSupportsAntialiasing() -> bool:
    # magical pyglet and/or OpenGL stuff
    ...


if __name__ == '__main__':
    if GPUSupportsAntialiasing():
        config = pyglet.gl.Config(sample_buffers=1, samples=8)
    else:
        config = None
    game_window = MyWindow(width=480, height=360, config=config)
    pyglet.app.run()
Is this possible within Pyglet or another Python graphics module?
Pyglet queries the driver for the configurations it supports and picks the best one matching the settings you pass. With config=None, Pyglet supplies its own defaults, pyglet.gl.Config(double_buffer=True, depth_size=24), falling back to depth_size=16 if a 24-bit depth buffer isn't supported, and picks the best match.
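You can run that same matching step yourself before creating the window. Here is a sketch of the GPUSupportsAntialiasing helper from the question, built on Screen.get_best_config, which raises NoSuchConfigException when the driver has no matching config (note: in newer pyglet releases the display module is pyglet.display rather than pyglet.canvas):

import pyglet


def GPUSupportsAntialiasing(samples: int = 8) -> bool:
    # Ask the driver for the best config matching a multisampled template.
    display = pyglet.canvas.get_display()
    screen = display.get_default_screen()
    template = pyglet.gl.Config(sample_buffers=1, samples=samples,
                                double_buffer=True)
    try:
        screen.get_best_config(template)
        return True
    except pyglet.window.NoSuchConfigException:
        return False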
That being said, I have an Intel GPU and can replicate the issue: with the config pyglet.gl.Config(sample_buffers=1, samples=8), the circle disappears in your example.
The trick is to explicitly request double buffering: config = pyglet.gl.Config(sample_buffers=1, samples=8, double_buffer=True). With that, the circle shows with AA. Some graphics cards may tolerate the absence of double buffering better than others, but the most compatible option is to leave double_buffer=True.
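For completeness, here is the question's __main__ block with the working config; the only change is adding double_buffer=True:

if __name__ == '__main__':
    config = pyglet.gl.Config(sample_buffers=1, samples=8,
                              double_buffer=True)
    game_window = MyWindow(width=480, height=360, config=config)
    pyglet.app.run()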
To answer the original question of determining the maximum sample count the hardware supports, you have to query the driver directly. Pyglet doesn't offer a built-in way, but you can do it with raw OpenGL calls. Note that a current OpenGL context is required, so create a window (it can be hidden) before querying:
import pyglet
from ctypes import c_int, byref

# A current OpenGL context is required; a hidden window provides one.
window = pyglet.window.Window(visible=False)

max_samples = c_int()
pyglet.gl.glGetIntegerv(pyglet.gl.GL_MAX_SAMPLES, byref(max_samples))
print(f"maximum samples: {max_samples.value}")

window.close()