
How to properly coalesce reads from global memory into shared memory with elements of type short or char (assuming one thread per element)?


I have a question about coalesced global memory loads in CUDA. Currently I need to be able to execute on a CUDA device with compute capability 1.1 or 1.3.

I am writing a CUDA kernel which reads an array of type T from global memory into shared memory, does some computation, and then writes an array of type T back to global memory. I am using shared memory because the computation for each output element depends not only on the corresponding input element, but also on nearby input elements. I only want to load each input element once, hence I want to cache the input elements in shared memory.

My plan is to have each thread read one element into shared memory, then __syncthreads() before beginning the computation. In this scenario, each thread loads, computes, and stores one element (although the computation depends on elements loaded into shared memory by other threads).

For this question I want to focus on the read from global memory into shared memory.

Assuming that there are N elements in the array, I have configured CUDA to execute a total of N threads. For the case where sizeof(T) == 4, this should coalesce nicely according to my understanding of CUDA, since thread K will read word K (where K is the thread index).
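For the 4-byte case, the load phase might be sketched as follows (the kernel name, `TILE_SIZE`, and the assumption that the block size equals `TILE_SIZE` are my own illustrative choices, not from the question):

```cuda
#define TILE_SIZE 256  // illustrative; assumes blockDim.x == TILE_SIZE

__global__ void loadKernel(const float *in, float *out, int n)
{
    __shared__ float tile[TILE_SIZE];

    int k = blockIdx.x * blockDim.x + threadIdx.x;  // global element index

    if (k < n)
        tile[threadIdx.x] = in[k];  // thread K reads word K: coalesces
    __syncthreads();

    // ... computation using tile[] (including neighbours) goes here ...

    if (k < n)
        out[k] = tile[threadIdx.x];
}
```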

However, in the case where sizeof(T) < 4, for example if T=unsigned char or if T=short, then I think there may be a problem. In this case, my (naive) plan is:

  • Compute numElementsPerWord = 4 / sizeof(T)
  • if(K % numElementsPerWord == 0), then have thread K read the next full 32-bit word
  • store the 32-bit word in shared memory
  • after the shared memory has been populated, (and __syncthreads() called) then each thread K can process work on computing output element K
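For T=short, this plan might be sketched as below (names are illustrative; note that only every other thread issues a load, which is exactly the pattern being asked about):

```cuda
#define TILE_SIZE 256  // elements (and threads) per block; illustrative

__global__ void naiveLoad(const short *in, short *out, int n)
{
    __shared__ short tile[TILE_SIZE];

    int k = blockIdx.x * blockDim.x + threadIdx.x;
    const int numElementsPerWord = 4 / sizeof(short);  // == 2

    // Only every numElementsPerWord-th thread issues a 32-bit load.
    // Assumes n is even and `in` is 4-byte aligned, for brevity.
    if (threadIdx.x % numElementsPerWord == 0 && k < n) {
        unsigned int w = ((const unsigned int *)in)[k / numElementsPerWord];
        *(unsigned int *)&tile[threadIdx.x] = w;  // write both shorts at once
    }
    __syncthreads();

    // ... each thread K now computes output element K from tile[] ...
}
```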

My concern is that this will not coalesce because (taking T=short as an example):

  • Thread 0 reads word 0 from global memory
  • Thread 1 does not read
  • Thread 2 reads word 1 from global memory
  • Thread 3 does not read
  • etc...

In other words, thread K reads word K / numElementsPerWord (i.e. K * sizeof(T) / 4), and only every numElementsPerWord-th thread issues a load at all. This would seem not to coalesce properly.

An alternative approach that I considered was:

  • Launch with number of threads = (N + 3) / 4, so that each thread is responsible for loading and processing one 32-bit word, i.e. 4/sizeof(T) elements (1, 2, or 4 depending on sizeof(T)). However, I am concerned that this approach will not be as fast as possible, since each thread must then do twice (if T=short) or four times (if T=unsigned char) the amount of processing.
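A sketch of this alternative for T=short might look like the following (illustrative names; the low/high-half extraction assumes little-endian byte order, which CUDA GPUs use):

```cuda
// One thread per 32-bit word; each thread unpacks, processes, and
// repacks 4/sizeof(T) elements (2 for short). Assumes N is even and
// `in`/`out` are 4-byte aligned.
__global__ void wordPerThread(const short *in, short *out, int nWords)
{
    int w = blockIdx.x * blockDim.x + threadIdx.x;  // word index
    if (w < nWords) {
        unsigned int packed = ((const unsigned int *)in)[w];  // coalesced

        short a = (short)(packed & 0xFFFFu);  // element 2*w   (low half)
        short b = (short)(packed >> 16);      // element 2*w+1 (high half)

        // ... compute on a and b here ...

        ((unsigned int *)out)[w] =
            ((unsigned int)(unsigned short)b << 16) | (unsigned short)a;
    }
}
```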

Can someone please tell me if my assumption about my plan is correct, i.e. it will not coalesce properly?

Can you please comment on my alternative approach?

Can you recommend a more optimal approach that properly coalesces?


Solution

  • You are correct, you have to do loads of at least 32 bits in size to get coalescing, and the scheme you describe (having every other thread do a load) will not coalesce. Just shift the offset right by 2 bits and have each thread do a contiguous 32-bit load, and use conditional code to inhibit execution for threads that would operate on out-of-range addresses.

    Since you are targeting SM 1.x, note also that 1) in order for coalescing to happen, thread 0 of a given half-warp (a group of 16 threads) must read from a 64-, 128- or 256-byte aligned address for 4-, 8- and 16-byte operands, respectively, and 2) once your data is in shared memory, you may want to unroll your loop by 2x (for short) or 4x (for char) so adjacent threads reference adjacent 32-bit words, to avoid shared memory bank conflicts.
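Putting the two points together for T=short, a sketch might be (illustrative names; assumes N is even, `in` is suitably aligned at each block's first word, and blockDim.x == BLOCK):

```cuda
#define BLOCK 256  // threads per block; illustrative

// One thread per 32-bit word. The load phase has adjacent threads read
// adjacent words, so it coalesces; the 2x-unrolled compute phase has each
// thread touch the two shorts of its own 32-bit word, so adjacent threads
// reference adjacent words in shared memory and avoid bank conflicts.
__global__ void shortKernel(const short *in, short *out, int n)
{
    __shared__ short tile[2 * BLOCK];  // two shorts per thread

    int w = blockIdx.x * blockDim.x + threadIdx.x;  // word index
    int nWords = n / 2;                              // assumes even n

    if (w < nWords)
        ((unsigned int *)tile)[threadIdx.x] =
            ((const unsigned int *)in)[w];           // contiguous 32-bit load
    __syncthreads();

    if (w < nWords) {
        short r0 = tile[2 * threadIdx.x];      // element 2*w
        short r1 = tile[2 * threadIdx.x + 1];  // element 2*w + 1

        // ... neighbour-dependent computation reading nearby tile
        // entries would go here; omitted for brevity ...

        out[2 * w]     = r0;
        out[2 * w + 1] = r1;
    }
}
```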