
Calling sum reduction kernel from another kernel


I'm trying to sum-reduce an array from within a kernel, without sending the data back to the host, but I'm not getting the right results. Here is the sum kernel I use (slightly modified from the one NVIDIA provides):

// EMUSYNC comes from the NVIDIA SDK reduction sample: it expands to
// __syncthreads() under device emulation and to nothing otherwise.
#ifdef __DEVICE_EMULATION__
#define EMUSYNC __syncthreads()
#else
#define EMUSYNC
#endif

template <class T, unsigned int blockSize, bool nIsPow2>
__device__ void
reduce(T *g_idata, T *g_odata, unsigned int n)
{
    __shared__ T sdata[blockSize];

    // perform first level of reduction,
    // reading from global memory, writing to shared memory
    unsigned int tid = threadIdx.x;
    unsigned int i = blockIdx.x*blockSize*2 + threadIdx.x;
    unsigned int gridSize = blockSize*2*gridDim.x;

    T mySum = 0;

    // we reduce multiple elements per thread.  The number is determined by the 
    // number of active thread blocks (via gridDim).  More blocks will result
    // in a larger gridSize and therefore fewer elements per thread
    while (i < n)
    {         
        mySum += g_idata[i];
        // ensure we don't read out of bounds -- this is optimized away for powerOf2 sized arrays
        if (nIsPow2 || i + blockSize < n) 
            mySum += g_idata[i+blockSize];  
        i += gridSize;
    } 

    // each thread puts its local sum into shared memory 
    sdata[tid] = mySum;
    __syncthreads();


    // do reduction in shared mem
    if (blockSize >= 512) { if (tid < 256) { sdata[tid] = mySum = mySum + sdata[tid + 256]; } __syncthreads(); }
    if (blockSize >= 256) { if (tid < 128) { sdata[tid] = mySum = mySum + sdata[tid + 128]; } __syncthreads(); }
    if (blockSize >= 128) { if (tid <  64) { sdata[tid] = mySum = mySum + sdata[tid +  64]; } __syncthreads(); }

#ifndef __DEVICE_EMULATION__
    if (tid < 32)
#endif
    {
        // now that we are using warp-synchronous programming (below)
        // we need to declare our shared memory volatile so that the compiler
        // doesn't reorder stores to it and induce incorrect behavior.
        volatile T* smem = sdata;
        if (blockSize >=  64) { smem[tid] = mySum = mySum + smem[tid + 32]; EMUSYNC; }
        if (blockSize >=  32) { smem[tid] = mySum = mySum + smem[tid + 16]; EMUSYNC; }
        if (blockSize >=  16) { smem[tid] = mySum = mySum + smem[tid +  8]; EMUSYNC; }
        if (blockSize >=   8) { smem[tid] = mySum = mySum + smem[tid +  4]; EMUSYNC; }
        if (blockSize >=   4) { smem[tid] = mySum = mySum + smem[tid +  2]; EMUSYNC; }
        if (blockSize >=   2) { smem[tid] = mySum = mySum + smem[tid +  1]; EMUSYNC; }
    }

    // write result for this block to global mem 
    if (tid == 0) 
        g_odata[blockIdx.x] = sdata[0];
}

template <unsigned int blockSize>
__global__ void compute(int *values, int *temp, int *temp2, int *results, unsigned int N, unsigned int M)
{
    int tdx = threadIdx.x;
    int idx = blockIdx.x * blockDim.x + tdx;

    int val = 0;
    int cpt = 0;

    if (idx < N)
    {
        for (int i = 0; i < M; ++i)
        {
            for (int j = i + 1; j < M; ++j)
            {
                val = values[i * N + idx];
                __syncthreads();

                reduce<int, blockSize, false>(temp, temp2, N);
                __syncthreads();

                if (tdx == 0)
                {
                    val = 0;

                    for (int k = 0; k < gridDim.x; ++k)
                    {
                        val += temp2[k];
                        temp2[k] = 0;
                    }

                    results[cpt] = val;
                }

                __syncthreads();
                ++cpt;
            }
        }
    }
}

Am I missing something? Thanks!


Solution

  • Keep in mind that you cannot synchronise blocks within a grid. Block 1 might have already run the reduce function and written its value to temp2[1], while Block 2 has not even started and temp2[2] still contains garbage.

    If you really want to, you can enforce block synchronisation, but it is hacky, cumbersome and inefficient. Consider these alternatives instead:

    • You can assign one array to a single block: each block performs an independent reduction on its own array, so no inter-block synchronisation is needed (see the first sketch after this list).
    • You can run the reduction as a separate kernel call (as in the original CUDA examples), but without transferring the result back to the host: instead, you launch another kernel that processes the output of the previous one. The content of global memory is preserved between kernel calls (see the second sketch after this list).
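
A minimal sketch of the first alternative, assuming numArrays input arrays of n ints each are stored back to back in arrays; the kernel name reducePerBlock and the launch configuration are illustrative, not from the original post:

template <unsigned int blockSize>
__global__ void reducePerBlock(const int *arrays, int *sums, unsigned int n)
{
    // One block reduces one array: block b owns arrays[b*n .. b*n + n-1].
    // Blocks never touch each other's data, so no inter-block
    // synchronisation is needed.
    __shared__ int sdata[blockSize];
    unsigned int tid = threadIdx.x;
    const int *myArray = arrays + blockIdx.x * n;

    // Each thread accumulates every blockSize-th element of its array.
    int mySum = 0;
    for (unsigned int i = tid; i < n; i += blockSize)
        mySum += myArray[i];

    sdata[tid] = mySum;
    __syncthreads();

    // Standard shared-memory tree reduction within the block.
    for (unsigned int s = blockSize / 2; s > 0; s >>= 1)
    {
        if (tid < s)
            sdata[tid] += sdata[tid + s];
        __syncthreads();
    }

    if (tid == 0)
        sums[blockIdx.x] = sdata[0];  // one result per array
}

A launch like reducePerBlock<256><<<numArrays, 256>>>(d_arrays, d_sums, n); then produces numArrays independent sums in a single kernel call.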
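
And a sketch of the second alternative: run the reduction as its own kernel launch, then launch a second kernel that reads the per-block partial sums straight from global memory, so nothing is copied back to the host in between. The reduceKernel wrapper, the consume kernel and sumOnDevice are illustrative names, not from any CUDA API:

// Thin __global__ wrapper around the __device__ reduce from the
// question, so that it can be launched as a separate kernel.
template <class T, unsigned int blockSize, bool nIsPow2>
__global__ void reduceKernel(T *g_idata, T *g_odata, unsigned int n)
{
    reduce<T, blockSize, nIsPow2>(g_idata, g_odata, n);
}

// Second kernel: a single thread adds up the per-block partial sums
// that reduceKernel left in global memory.
__global__ void consume(const int *partialSums, int *result, unsigned int numBlocks)
{
    if (threadIdx.x == 0)
    {
        int total = 0;
        for (unsigned int k = 0; k < numBlocks; ++k)
            total += partialSums[k];
        *result = total;  // final sum, still in device memory
    }
}

void sumOnDevice(int *d_in, int *d_partial, int *d_result, unsigned int n)
{
    const unsigned int blockSize = 256;
    // Each block of the reduction consumes 2*blockSize elements per pass.
    unsigned int numBlocks = (n + 2 * blockSize - 1) / (2 * blockSize);

    // First launch: per-block partial sums into d_partial.
    reduceKernel<int, blockSize, false><<<numBlocks, blockSize>>>(d_in, d_partial, n);

    // Second launch reads d_partial directly; global memory is preserved
    // between kernel calls, so no cudaMemcpy to the host is needed.
    consume<<<1, 1>>>(d_partial, d_result, numBlocks);
}

In real code you would typically iterate the reduction kernel until one block's worth of partial sums remains, but the two-launch version above already shows the point: the intermediate data never leaves the GPU.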