
In-place transpose of a 3D array in PyCUDA


I have a 3D array and would like to transpose its first two dimensions (x & y), but not the third (z): on a 3D array A, I want the same result as NumPy's A.transpose((1,0,2)). Specifically, I want each element to hold the "transposed" global thread index. The kernel below is supposed to write the transposed flat index at the untransposed location in the 3D array A. It doesn't.

Any advice?
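For reference, this is the NumPy behaviour I am trying to reproduce (the array contents here are just illustrative):

```python
import numpy as np

# Swap the first two axes, leave the third (z) alone
A = np.arange(48, dtype=np.uint32).reshape(4, 4, 3)
B = A.transpose((1, 0, 2))

# B[i, j, k] == A[j, i, k] for every index triple,
# i.e. each z-slice is transposed independently
assert (B[:, :, 0] == A[:, :, 0].T).all()
```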

import numpy as np
from pycuda import compiler, gpuarray
import pycuda.driver as cuda
import pycuda.autoinit

kernel_code = """
__global__ void test_indexTranspose(uint*A){
    const size_t size_x = 4;
    const size_t size_y = 4;
    const size_t size_z = 3;

    // Thread position in each dimension
    const size_t tx = blockDim.x * blockIdx.x + threadIdx.x;
    const size_t ty = blockDim.y * blockIdx.y + threadIdx.y;
    const size_t tz = blockDim.z * blockIdx.z + threadIdx.z;

    if(tx < size_x && ty < size_y && tz < size_z){
        // Flat index
        const size_t ti = tz * size_x * size_y + ty * size_x + tx;
        // Transposed flat index
        const size_t tiT = tz * size_x * size_y + tx * size_x + ty;
        A[ti] = tiT;
    }
}
"""

A = np.zeros((4,4,3),dtype=np.uint32)
mod = compiler.SourceModule(kernel_code)
test_indexTranspose = mod.get_function('test_indexTranspose')
A_gpu = gpuarray.to_gpu(A)
test_indexTranspose(A_gpu, block=(2, 2, 1), grid=(2,2,3))

This is what is returned (not what I expected):

A_gpu.get()[:,:,0]
array([[ 0, 12,  9,  6],
       [ 3, 15, 24, 21],
       [18, 30, 27, 36],
       [33, 45, 42, 39]], dtype=uint32)

A_gpu.get()[:,:,1]
array([[ 4,  1, 13, 10],
       [ 7, 16, 28, 25],
       [22, 19, 31, 40],
       [37, 34, 46, 43]], dtype=uint32)

A_gpu.get()[:,:,2]
array([[ 8,  5,  2, 14],
       [11, 20, 17, 29],
       [26, 23, 32, 44],
       [41, 38, 35, 47]], dtype=uint32)

This is what I expected (but was not returned):

A_gpu.get()[:,:,0]
array([[0, 4, 8,  12],
       [1, 5, 9,  13],
       [2, 6, 10, 14],
       [3, 7, 11, 15]], dtype=uint32)

A_gpu.get()[:,:,1]
array([[16, 20, 24, 28],
       [17, 21, 25, 29],
       [18, 22, 26, 30],
       [19, 23, 27, 31]], dtype=uint32)

A_gpu.get()[:,:,2]
...

Thanks,


Solution

  • Creating the numpy array with strides that are consistent with the CUDA kernel solves the problem. The default (C-order) layout of a numpy array is not the row, column, depth order my kernel assumes: by default the last axis (z here) varies fastest in memory, whereas the kernel treats x as the fastest-varying dimension and z as the slowest. However, the strides can be set explicitly when creating the array.
    The above kernel works unchanged if the array is created like this:

    nRows = 4
    nCols = 4
    nSlices = 3
    nBytes = np.dtype(np.uint32).itemsize
    A = np.ndarray(shape=(nRows, nCols, nSlices), 
                   dtype=np.uint32, 
                   strides=(nCols*nBytes, 1*nBytes, nCols*nRows*nBytes))
    

    A stride is the number of bytes to jump in memory to move one step along a given dimension. E.g. to get from the 1st element of row 1 to the 1st element of row 2, you jump nCols * nBytes, i.e. 16 bytes.
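The fix can be sanity-checked without a GPU. The sketch below simulates the kernel's write pattern on a flat buffer (standing in for global memory), then views that same buffer through the strides from above; it is a CPU-side check of the index arithmetic, not the actual PyCUDA run:

```python
import numpy as np

nRows, nCols, nSlices = 4, 4, 3
nBytes = np.dtype(np.uint32).itemsize

# Flat buffer standing in for GPU global memory
buf = np.zeros(nRows * nCols * nSlices, dtype=np.uint32)

# Simulate the kernel: each "thread" writes its transposed
# flat index at its untransposed flat index
for tz in range(nSlices):
    for ty in range(nRows):
        for tx in range(nCols):
            ti  = tz * nCols * nRows + ty * nCols + tx
            tiT = tz * nCols * nRows + tx * nCols + ty
            buf[ti] = tiT

# View the same memory with the kernel-consistent strides
A = np.ndarray(shape=(nRows, nCols, nSlices),
               dtype=np.uint32,
               buffer=buf,
               strides=(nCols*nBytes, 1*nBytes, nCols*nRows*nBytes))

print(A[:, :, 0])
# [[ 0  4  8 12]
#  [ 1  5  9 13]
#  [ 2  6 10 14]
#  [ 3  7 11 15]]
```

Each z-slice now comes out transposed, matching the expected output in the question.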