Tags: python, cuda, parallel-processing, gpgpu, pycuda

Pycuda Vector arithmetic - Id inside Kernel


I'm trying to work out a simple program with pycuda to test it and later compare it to my opencl implementation. Yet, I'm having trouble adding two 1D arrays. The problem is that I can't seem to find the correct index of each element.

My code is very simple:

#!/usr/bin/env python
# -*- coding: utf-8 -*-

import pycuda.driver as cuda
import pycuda.autoinit
from pycuda.compiler import SourceModule
import numpy as np

#Host variables
a = np.array([[1.0, 2,0 , 3.0]], dtype=np.float32)
b = np.array([[4.0, 5,0 , 6.0]], dtype=np.float32)
k = np.float32(2.0)

#Device Variables
a_d = cuda.mem_alloc(a.nbytes)
b_d = cuda.mem_alloc(b.nbytes)
cuda.memcpy_htod(a_d, a)
cuda.memcpy_htod(b_d, b)
s_d = cuda.mem_alloc(a.nbytes)
m_d = cuda.mem_alloc(a.nbytes)

#Device Source
mod = SourceModule("""
    __global__ void S(float *s, float *a, float *b)
    {
        int bx = blockIdx.x;
        int by = blockIdx.y;
        int tx = threadIdx.x;
        int ty = threadIdx.y;
        int row = by * blockDim.y + ty;
        int col = bx * blockDim.x + tx;
        int dim = gridDim.x * blockDim.x;
        const int id = row * dim + col;
        s[id] = a[id] + b[id];
    }

    __global__ void M(float *m, float *a, float k)
    {
        int bx = blockIdx.x;
        int by = blockIdx.y;
        int tx = threadIdx.x;
        int ty = threadIdx.y;
        int row = by * blockDim.y + ty;
        int col = bx * blockDim.x + tx;
        int dim = gridDim.x * blockDim.x;
        const int id = row * dim + col;
        m[id] = k * a[id];
    }
""")

#Vector addition
func = mod.get_function("S")
func(s_d, a_d, b_d, block=(1,3,1))
s = np.empty_like(a)
cuda.memcpy_dtoh(s, s_d)

#Vector multiplication by constant
func = mod.get_function("M")
func(m_d, a_d, k, block=(1,3,1))
m = np.empty_like(a)
cuda.memcpy_dtoh(m, m_d)

print "Vector Addition"
print "Expected: " + str(a+b)
print "Result: " + str(s) + "\n"
print "Vector Multiplication"
print "Expected: " + str(k*a)
print "Result: " + str(m)

My output is:

Vector Addition
Expected: [[ 5.  7.  0.  9.]]
Result: [[ 5.  7.  0.  6.]]

Vector Multiplication
Expected: [[ 2.  4.  0.  6.]]
Result: [[ 2.  4.  0.  6.]]

I don't really understand how indexing works in CUDA. I've found some documentation online that gave me some insight into how grids, blocks and threads work, but I still can't get it to work right. I must be missing something. Every piece of information is dearly appreciated.


Solution

  • Your indexing seems fine, even if it's a bit heavyweight for this small example (one dimension would suffice).

    The problem is that your arrays a and b have 4 elements each: the stray comma in 2,0 is parsed as two separate elements, 2 and 0, rather than the single value 2.0. Your kernel functions only operate on the first 3 elements, so the 4th element of the result is not as expected.

    Did you mean the following?

    a = np.array([[1.0, 2.0, 3.0]], dtype=np.float32)
    b = np.array([[4.0, 5.0, 6.0]], dtype=np.float32)