
CUDA/PyCUDA: Which GPU is running X11?


In a Linux system with multiple GPUs, how can you determine which GPU is running X11 and which is completely free to run CUDA kernels? On a system with a low-powered GPU for X11 and a higher-powered GPU for kernels, a simple heuristic can pick the faster card. But on a system with two equal cards this method cannot be used. Is there a CUDA and/or X11 API to determine this?

UPDATE: The command 'nvidia-smi -a' shows whether a "display" is connected or not. I have yet to determine whether this means physically connected, logically connected (running X11), or both. Running strace on this command shows lots of ioctls being invoked and no calls to X11, so I assume the card is only reporting that a display is physically connected.
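
One way to probe this distinction programmatically is sketched below. It assumes the installed nvidia-smi supports the display_mode and display_active query fields (a newer interface than 'nvidia-smi -a'); if those names differ on your driver, check 'nvidia-smi --help-query-gpu'.

    import csv
    import io
    import subprocess

    # Ask nvidia-smi for each GPU's display state. Assumes a driver whose
    # nvidia-smi supports the display_mode / display_active query fields:
    #   display_mode   - a display is physically connected to the GPU
    #   display_active - a display (e.g. an X server) is initialized on it
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=index,name,display_mode,display_active",
         "--format=csv,noheader"],
        universal_newlines=True)

    for row in csv.reader(io.StringIO(out)):
        index, name, mode, active = (field.strip() for field in row)
        print("GPU %s (%s): physically connected=%s, driving a display=%s"
              % (index, name, mode, active))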


Solution

  • There is a device property kernelExecTimeoutEnabled in the cudaDeviceProp structure which indicates whether the device is subject to a display watchdog timer. That is the best indicator of whether a given CUDA device is running X11 (or the Windows/Mac OS equivalent).

    In PyCUDA you can query the device status like this:

    In [1]: from pycuda import driver as drv

    In [2]: drv.init()

    In [3]: print(drv.Device(0).get_attribute(drv.device_attribute.KERNEL_EXEC_TIMEOUT))
    1

    In [4]: print(drv.Device(1).get_attribute(drv.device_attribute.KERNEL_EXEC_TIMEOUT))
    0
    

    Here device 0 has a display attached, and device 1 is a dedicated compute device.
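
    As a usage sketch (not part of the original answer), the same attribute can be used to pick a compute device automatically. pick_compute_device below is a hypothetical helper that returns the index of the first device without a watchdog timer, falling back to device 0 if every device drives a display:

    from pycuda import driver as drv

    drv.init()

    def pick_compute_device():
        """Return the index of the first device that is not subject to a
        display watchdog timer, i.e. most likely not running X11."""
        for i in range(drv.Device.count()):
            timeout = drv.Device(i).get_attribute(
                drv.device_attribute.KERNEL_EXEC_TIMEOUT)
            if timeout == 0:
                return i
        # Every device has the watchdog enabled; fall back to device 0.
        return 0

    dev = drv.Device(pick_compute_device())
    print("Using %s" % dev.name())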