I was trying to get a variation of the CUDA matrix transpose sample working for all kinds of sizes. Briefly, I have to take an input array (double *a) and write it to two different parts (you will notice the different offsets) of a bigger matrix (double *tab). I'm storing the data in row-major format, so I'm using this macro for indexing:
#define IDX2L(i,j,ld) (((i)*(ld))+(j)) // 0-based index + row-major format
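For example, with a leading dimension of ld = 3 columns, element (1,2) maps to linear index 1*3 + 2 = 5:

int idx = IDX2L(1, 2, 3); // idx == 5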
This is the simple code I use.
__global__ void cuda_a_Coalesced(double *tab, int tab_rows, int a_rows, double *a)
{
    // 16x17 tile: the +1 padding avoids shared-memory bank conflicts
    __shared__ double tile[16*(16+1)];

    int col = threadIdx.x + blockIdx.x * blockDim.x;
    int row = threadIdx.y + blockIdx.y * blockDim.y;
    int col_2, row_2;
    int a_cols = tab_rows - a_rows; // tab_rows-a_rows is the number of columns of a
    int tab_cols = 2*tab_rows + 2;  // 2*tab_rows+2 is the number of columns of tab

    if( (col<a_cols) && (row<a_rows) )
    {
        // Load the data into shared mem
        tile[threadIdx.x+threadIdx.y*(16+1)] = a[IDX2L(row,col,a_cols)];
        // Normal copy (+ offsets)
        tab[IDX2L(row,col+tab_rows+a_rows,tab_cols)] = tile[threadIdx.x+threadIdx.y*(16+1)];
        // New idx
        col_2 = blockIdx.y * blockDim.y + threadIdx.x;
        row_2 = blockIdx.x * blockDim.x + threadIdx.y;
    }
    __syncthreads();

    if( (row_2<a_cols) && (col_2<a_rows) )
        // Transpose (+ other offsets)
        tab[IDX2L(row_2+a_rows,col_2+tab_rows,tab_cols)] = -tile[threadIdx.y+threadIdx.x*(16+1)];
}
These are the launch parameters:
b1 = (int)ceil((float)a_cols/16);
b2 = (int)ceil((float)a_rows/16);
dim3 bck(b1,b2); dim3 th(16,16);
cuda_a_Coalesced<<<bck,th>>>(tab,tab_rows,a_rows,a);
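As an aside, the integer ceiling can also be computed without the float cast: b1 = (a_cols + 15) / 16;.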
The normal copy is always performed correctly, regardless of size. The transpose copy only works for sizes that are an integer multiple of the block size (as in the CUDA sample). When the transpose copy fails, some parts of the operation are right and others are not, in a way that I cannot exactly predict or track. Note that the idea is to swap the index into shared memory so that the transpose can be written in coalesced form to the output matrix (due to the row-major format).
Could someone tell me why the code only works with those sizes?
Is there any trick to solve this situation?
The problem was due to undefined values: col_2 and row_2 were being assigned within an if() statement that not all threads were reaching. Threads that failed the first bounds check left both variables uninitialized, so in the second bounds check those threads compared garbage values, and some of them went on to write to unpredictable locations.
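The same hazard in isolation (a minimal hypothetical kernel, not the original code):

__global__ void hazard_sketch(int *out, int limit)
{
    int idx;                  // uninitialized for threads 8..31
    if (threadIdx.x < 8)
        idx = threadIdx.x;    // only threads 0..7 assign idx
    __syncthreads();
    if (idx < limit)          // the other threads compare garbage here...
        out[idx] = 1;         // ...and may write to arbitrary offsets
}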
To solve this situation we can give col_2 and row_2 their values when we declare them, and delete the equivalent computation that took place within the mentioned if():
__shared__ double tile[16*(16+1)];

int col = threadIdx.x + blockIdx.x * blockDim.x;
int row = threadIdx.y + blockIdx.y * blockDim.y;
int col_2 = blockIdx.y * blockDim.y + threadIdx.x;
int row_2 = blockIdx.x * blockDim.x + threadIdx.y;
int a_cols = tab_rows - a_rows;
int tab_cols = 2*tab_rows + 2;
Thus, the rest of the code looks like this:
if( (col<a_cols) && (row<a_rows) )
{
    // Load the data into shared mem
    tile[threadIdx.x+threadIdx.y*(16+1)] = a[IDX2L(row,col,a_cols)];
    // Normal copy (+ offsets)
    tab[IDX2L(row,col+tab_rows+a_rows,tab_cols)] = tile[threadIdx.x+threadIdx.y*(16+1)];
}
__syncthreads();

if( (row_2<a_cols) && (col_2<a_rows) )
    // Transpose (+ other offsets)
    tab[IDX2L(row_2+a_rows,col_2+tab_rows,tab_cols)] = -tile[threadIdx.y+threadIdx.x*(16+1)];
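For reference, a minimal host-side sketch of how the fixed kernel might be driven. The concrete sizes, allocation, and error check here are assumptions for illustration, not part of the original code:

#include <cuda_runtime.h>
#include <cstdio>

// IDX2L and cuda_a_Coalesced as defined above

int main()
{
    // Assumed example sizes; deliberately not multiples of 16
    int a_rows = 30;
    int a_cols = 20;
    int tab_rows = a_rows + a_cols; // so that a_cols == tab_rows - a_rows
    int tab_cols = 2*tab_rows + 2;

    double *a, *tab;
    cudaMalloc(&a,   a_rows   * a_cols   * sizeof(double));
    cudaMalloc(&tab, tab_rows * tab_cols * sizeof(double));
    // ... fill a with input data (omitted) ...

    int b1 = (a_cols + 15) / 16; // integer ceil(a_cols/16)
    int b2 = (a_rows + 15) / 16;
    dim3 bck(b1,b2); dim3 th(16,16);
    cuda_a_Coalesced<<<bck,th>>>(tab, tab_rows, a_rows, a);
    cudaDeviceSynchronize();
    printf("kernel status: %s\n", cudaGetErrorString(cudaGetLastError()));

    cudaFree(a);
    cudaFree(tab);
    return 0;
}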