Tags: cuda, nvidia, pycuda

CUDA threads per block with multiple GPUs


I'm using CUDA GPU programming in a college project and was just wondering: if a GPU has a maximum block size of 1024, does having 2 GPUs mean that block size is doubled? And would this affect the implementation of the program; do you need to access the GPUs individually?


Solution

  • I think what you're asking about is the maximum number of threads per block, which is a per-GPU limit. Even if you have two GPUs that each support 1024 threads per block, every block you launch is still capped at 1024 threads; the limits of the two devices do not combine.

    So to answer your question, no, block size is not doubled. You would still need to communicate with each GPU individually, unfortunately.

    You can find more technical specifications, such as the maximum number of threads per block, in the compute capability tables of NVIDIA's CUDA Programming Guide. The sketch below shows how to query this limit on each device and submit work to each GPU separately.
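
    Here is a minimal sketch using the CUDA runtime API. It assumes at least one CUDA-capable GPU is installed; the kernel name fill and the launch configuration are made up purely for illustration, not taken from your project.

        #include <cstdio>
        #include <cuda_runtime.h>

        // Trivial kernel used only to demonstrate per-device launches.
        __global__ void fill(int *data, int value)
        {
            data[threadIdx.x + blockIdx.x * blockDim.x] = value;
        }

        int main()
        {
            int deviceCount = 0;
            cudaGetDeviceCount(&deviceCount);

            for (int dev = 0; dev < deviceCount; ++dev) {
                cudaDeviceProp prop;
                cudaGetDeviceProperties(&prop, dev);

                // Each GPU reports its own limit; two GPUs with 1024 threads per
                // block still give you blocks of at most 1024 threads on each device.
                printf("GPU %d (%s): maxThreadsPerBlock = %d\n",
                       dev, prop.name, prop.maxThreadsPerBlock);

                // Work is submitted to each GPU separately after cudaSetDevice().
                cudaSetDevice(dev);

                int *d_data = nullptr;
                const int threadsPerBlock = prop.maxThreadsPerBlock;  // e.g. 1024
                const int blocks = 4;
                cudaMalloc(&d_data, blocks * threadsPerBlock * sizeof(int));

                fill<<<blocks, threadsPerBlock>>>(d_data, dev);
                cudaDeviceSynchronize();

                cudaFree(d_data);
            }
            return 0;
        }

    Each iteration of the loop talks to one GPU: cudaSetDevice() selects the device, and the allocations and kernel launches that follow target that device only, which is why two GPUs never combine into a single larger block.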