I would like to compile `parallel.cu` and `python_wrapper.cpp`, where `python_wrapper.cpp` uses Boost.Python to expose the methods in `parallel.cu` to Python. I'm new to both CUDA and Boost.Python.
From their manuals and Google, I couldn't find out how to make them talk to each other. Some sites say I should do something like:

```
nvcc -c parallel.cu
g++ -c python_wrapper.cpp
g++ parallel.o python_wrapper.o
```

But the only way I know to compile Boost.Python code is to use bjam. There have been attempts to integrate nvcc into bjam, but I couldn't make them work.
parallel.cuh:

```cpp
__global__ void parallel_work();
int do_parallel_work();
```
python_wrapper.cpp:

```cpp
#include <boost/python/module.hpp>
#include <boost/python/def.hpp>

#include "parallel.cuh"

BOOST_PYTHON_MODULE(parallel_ext) {
    using namespace boost::python;
    def("parallel", do_parallel_work);
}
```
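For reference, a minimal `parallel.cu` consistent with that header might look like the following; the kernel body here is only an illustrative placeholder, not code from the question:

```cuda
#include <cstdio>

#include "parallel.cuh"

// Placeholder kernel body so the example compiles and links.
__global__ void parallel_work() {
    printf("hello from thread %d\n", threadIdx.x);
}

// Host-side wrapper that launches the kernel; this is the
// function exposed to Python through Boost.Python.
int do_parallel_work() {
    parallel_work<<<1, 4>>>();
    cudaDeviceSynchronize();
    return 0;
}
```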
How can I compile these files? I have heard of PyCUDA, but I need to use the Boost and Thrust libraries in my `.cu` files. Also, if possible, I would like to stick to a standard command-line-driven compilation process.
Create a static or dynamic library with the CUDA functions and link it in. That is, use nvcc to create the library and then, in a separate step, use g++ to create the Python module and link in the library.
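As a sketch, the build could look like the commands below. The include and library paths are placeholders you will need to adjust for your system, and the Python version is an assumption. One caveat with the header as written: g++ does not understand `__global__`, so the `parallel_work()` declaration in `parallel.cuh` should be guarded with `#ifdef __CUDACC__` (or moved into `parallel.cu`) before compiling `python_wrapper.cpp`.

```
# 1. Compile the CUDA code to position-independent object code.
nvcc -c -Xcompiler -fPIC parallel.cu -o parallel.o

# 2. Bundle it into a static library.
ar rcs libparallel.a parallel.o

# 3. Compile the wrapper with g++ and link the library plus the
#    CUDA runtime into a Python extension module.
g++ -shared -fPIC python_wrapper.cpp libparallel.a \
    -I/usr/include/python2.7 \
    -L/usr/local/cuda/lib64 \
    -lboost_python -lcudart \
    -o parallel_ext.so
```

If the module builds, you can check it from the command line with `python -c "import parallel_ext; parallel_ext.parallel()"`. No bjam is involved; this is an ordinary compile-then-link pipeline, which matches the standard command-line-driven process you asked for.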