
Reduce3 example in CUDA SDK


I'm reading the reduction optimization examples in the CUDA SDK, and I'm having trouble following what happens between reduce2 and reduce3:

/*
    This version uses sequential addressing -- no divergence or bank conflicts.
*/
template <class T>
__global__ void
reduce2(T *g_idata, T *g_odata, unsigned int n)
{
    T *sdata = SharedMemory<T>();

    // load shared mem
    unsigned int tid = threadIdx.x;
    unsigned int i = blockIdx.x*blockDim.x + threadIdx.x;

    sdata[tid] = (i < n) ? g_idata[i] : 0;

    __syncthreads();

    // do reduction in shared mem
    for (unsigned int s=blockDim.x/2; s>0; s>>=1)
    {
        if (tid < s)
        {
            sdata[tid] += sdata[tid + s];
        }

        __syncthreads();
    }

    // write result for this block to global mem
    if (tid == 0) g_odata[blockIdx.x] = sdata[0];
}

/*
This version uses n/2 threads --
it performs the first level of reduction when reading from global memory.
*/
template <class T>
__global__ void
reduce3(T *g_idata, T *g_odata, unsigned int n)
{
    T *sdata = SharedMemory<T>();

    // perform first level of reduction,
    // reading from global memory, writing to shared memory
    unsigned int tid = threadIdx.x;
    unsigned int i = blockIdx.x*(blockDim.x*2) + threadIdx.x;

    T mySum = (i < n) ? g_idata[i] : 0;

    if (i + blockDim.x < n)
        mySum += g_idata[i+blockDim.x];

    sdata[tid] = mySum;
    __syncthreads();

    // do reduction in shared mem
    for (unsigned int s=blockDim.x/2; s>0; s>>=1)
    {
        if (tid < s)
        {
            sdata[tid] = mySum = mySum + sdata[tid + s];
        }

        __syncthreads();
    }

    // write result for this block to global mem
    if (tid == 0) g_odata[blockIdx.x] = sdata[0];
}

I'm having trouble visualizing what the first level of reduction in reduce3 is trying to do, and why the number of threads has been halved. Can anyone give me some pointers?


Solution

  • The only difference between the two is that reduce3 performs one addition per thread while loading from global memory, before the shared memory reduction begins. So where reduce2 loads a single value from global memory and stores it in shared memory:

    // load shared mem
    unsigned int tid = threadIdx.x;
    unsigned int i = blockIdx.x*blockDim.x + threadIdx.x;
    
    sdata[tid] = (i < n) ? g_idata[i] : 0;
    

    reduce3 loads two values, adds them and then stores the result in shared memory:

    unsigned int tid = threadIdx.x;
    unsigned int i = blockIdx.x*(blockDim.x*2) + threadIdx.x;
    
    T mySum = (i < n) ? g_idata[i] : 0;
    
    if (i + blockDim.x < n)
        mySum += g_idata[i+blockDim.x];
    
    sdata[tid] = mySum;
    __syncthreads();
    

    Because each thread performs the first level of the standard "power of two" reduction while reading from global memory, before the shared memory reduction begins, the total number of threads required is half that of reduce2. You should also note that half of the threads in reduce2 are effectively wasted: on the very first loop iteration, threads with tid >= s fail the if (tid < s) test, so they only load data into shared memory and never perform an addition. reduce3 removes them and uses half as many threads to perform the same operation.
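
    To make the index arithmetic concrete, here is a minimal host-side C++ sketch (ordinary sequential code, not actual device code) of one block of reduce3's load phase followed by the tree reduction, using a hypothetical blockDim of 4 reducing 8 inputs:

    ```cpp
    #include <cstdio>
    #include <cassert>
    #include <vector>

    int main() {
        // Hypothetical setup: one block (blockIdx.x == 0), blockDim == 4,
        // reducing n == 8 inputs. reduce2 would need 8 threads for this;
        // reduce3 needs only 4.
        std::vector<int> g_idata = {1, 2, 3, 4, 5, 6, 7, 8};
        unsigned n = g_idata.size();
        unsigned blockDim = 4;
        std::vector<int> sdata(blockDim);

        // Load phase: each "thread" reads two elements, blockDim apart,
        // and adds them -- the first level of the reduction is done here.
        for (unsigned tid = 0; tid < blockDim; ++tid) {
            unsigned i = tid;                        // blockIdx.x*(blockDim*2) == 0
            int mySum = (i < n) ? g_idata[i] : 0;
            if (i + blockDim < n)
                mySum += g_idata[i + blockDim];      // pairs: 1+5, 2+6, 3+7, 4+8
            sdata[tid] = mySum;
        }

        // The remaining shared-memory tree reduction is unchanged from reduce2.
        for (unsigned s = blockDim / 2; s > 0; s >>= 1)
            for (unsigned tid = 0; tid < s; ++tid)
                sdata[tid] += sdata[tid + s];

        printf("%d\n", sdata[0]);   // 36 == 1+2+...+8
        assert(sdata[0] == 36);
        return 0;
    }
    ```

    Note how every iteration of the inner loop now keeps all tid < s "threads" busy: the half-idle first iteration that reduce2 would have executed has been folded into the load phase.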

    If you keep going through the versions of the code, you will see this idea extended to its logical conclusion: each thread loads and sums many values before storing the result to shared memory and performing the reduction. In a memory-bandwidth-limited operation like reduction, using fewer threads is a win for two reasons: the per-thread setup overhead is amortised over many more input values, and contention for memory controller resources is reduced.
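
    That end state can be sketched on the host as well (again ordinary sequential C++, not device code, with hypothetical launch dimensions): each thread walks through global memory with a stride of gridDim*blockDim, accumulating a private sum, so only a small fixed number of threads handle an arbitrarily large input:

    ```cpp
    #include <cstdio>
    #include <cassert>
    #include <vector>

    int main() {
        // Hypothetical launch: gridDim == 2 blocks of blockDim == 4 threads,
        // i.e. 8 threads total, reducing n == 64 elements (8 each).
        std::vector<int> g_idata(64, 1);             // all ones: total is 64
        unsigned n = g_idata.size();
        unsigned blockDim = 4, gridDim = 2;
        unsigned gridSize = gridDim * blockDim;      // stride between loads
        std::vector<int> g_odata(gridDim);

        for (unsigned b = 0; b < gridDim; ++b) {
            std::vector<int> sdata(blockDim);
            for (unsigned tid = 0; tid < blockDim; ++tid) {
                // Each "thread" sums many elements before touching shared mem,
                // amortising its setup cost over n/gridSize inputs.
                int mySum = 0;
                for (unsigned i = b * blockDim + tid; i < n; i += gridSize)
                    mySum += g_idata[i];
                sdata[tid] = mySum;                  // 64/8 == 8 per thread
            }
            // Shared-memory tree reduction, identical to the earlier versions.
            for (unsigned s = blockDim / 2; s > 0; s >>= 1)
                for (unsigned tid = 0; tid < s; ++tid)
                    sdata[tid] += sdata[tid + s];
            g_odata[b] = sdata[0];                   // one partial sum per block
        }

        int total = g_odata[0] + g_odata[1];         // final sum over blocks
        printf("%d\n", total);                       // 64
        assert(total == 64);
        return 0;
    }
    ```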