I was writing a program that uses SciPy's CubicSpline routine at certain points; because of this SciPy call I cannot use Numba's @jit on my whole program. I recently came across the @overload feature and I was wondering if it could be used in this way:
from numba.extending import overload
from numba import jit
from scipy.interpolate import CubicSpline
import numpy as np
x = np.arange(10)
y = np.sin(x)
xs = np.arange(-0.5, 9.6, 0.1)
def Spline_interp(xs,x,y):
    cs = CubicSpline(x, y)
    ds = cs(xs)
    return ds
@overload(Spline_interp)
def jit_Spline_interp(xs,x,y):
    ds = Spline_interp(xs,x,y)
    def jit_Spline_interp_impl(xs,x,y):
        return ds
    return jit_Spline_interp_impl
@jit(nopython=True)
def main():
    # other code compatible with @njit
    ds = Spline_interp(xs,x,y)
    # other code compatible with @njit
    return ds
print(main())
Kindly correct me if my understanding of the @overload feature is wrong, and what would be a possible solution for using such SciPy libraries with Numba?
Especially for more complex functions, reimplementing everything in Numba-compilable Python code can be quite a lot of work, and sometimes slower. The following answer is about calling C-like functions directly from a shared object or dynamic library.
Compiling the Fortran routines
This example shows a way to do this on Windows, but it should be straightforward on other operating systems. For a portable interface, defining an ISO_C_BINDING interface is highly recommended. In this answer I will try it without one.
dll.def
EXPORTS
SPLEV @1
Compilation
ifort /dll dll.def splev.f fpbspl.f /O3 /fast
Calling this function directly from Numba
Wrapper
The following code shows two ways to call this function. In Numba it isn't directly possible to pass a scalar by reference. You can either allocate an array on the heap (slow for small functions), or use an intrinsic to allocate arrays on the stack.
import numba as nb
import numpy as np
import ctypes

# Load the compiled library and declare the signature of SPLEV
lib = ctypes.cdll.LoadLibrary("splev.dll")

dble_p = ctypes.POINTER(ctypes.c_double)
int_p = ctypes.POINTER(ctypes.c_longlong)

SPLEV = lib.SPLEV
SPLEV.restype = ctypes.c_void_p
SPLEV.argtypes = (dble_p, int_p, dble_p, int_p, dble_p,
                  dble_p, int_p, int_p, int_p)
from numba import types
from numba.extending import intrinsic
from numba.core import cgutils
# val_to_ptr: allocate a stack slot for a scalar and return a pointer to it
@intrinsic
def val_to_ptr(typingctx, data):
    def impl(context, builder, signature, args):
        ptr = cgutils.alloca_once_value(builder, args[0])
        return ptr
    sig = types.CPointer(nb.typeof(data).instance_type)(nb.typeof(data).instance_type)
    return sig, impl

# ptr_to_val: load the scalar value back from such a pointer
@intrinsic
def ptr_to_val(typingctx, data):
    def impl(context, builder, signature, args):
        val = builder.load(args[0])
        return val
    sig = data.dtype(types.CPointer(data.dtype))
    return sig, impl
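As a quick sanity check (my own small addition, not part of the original answer), the two intrinsics can be exercised on their own inside an njit function; the scalar should survive the round trip through the stack pointer:

@nb.njit()
def roundtrip(v):
    # write the scalar into a stack slot, then read it back
    p = val_to_ptr(nb.int64(v))
    return ptr_to_val(p)

assert roundtrip(42) == 42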
# With intrinsics, the temporary scalar buffers are allocated on the stack.
# This is faster, but mostly relevant for functions with very low runtime.
@nb.njit()
def splev_wrapped(x, coeff, e):
    # Only pointers are passed to the Fortran function.
    # The arrays have to be contiguous!
    t = np.ascontiguousarray(coeff[0])
    x = np.ascontiguousarray(x)
    c = coeff[1]
    k = coeff[2]
    y = np.empty(x.shape[0], dtype=np.float64)

    n_arr = val_to_ptr(nb.int64(t.shape[0]))
    k_arr = val_to_ptr(nb.int64(k))
    m_arr = val_to_ptr(nb.int64(x.shape[0]))
    e_arr = val_to_ptr(nb.int64(e))
    ier_arr = val_to_ptr(nb.int64(0))

    SPLEV(t.ctypes, n_arr, c.ctypes, k_arr, x.ctypes,
          y.ctypes, m_arr, e_arr, ier_arr)
    return y, ptr_to_val(ier_arr)
# Without intrinsics: the scalar arguments are passed via length-1 heap arrays.
@nb.njit()
def splev_wrapped_2(x, coeff, e):
    # Only pointers are passed to the Fortran function.
    # The arrays have to be contiguous!
    t = np.ascontiguousarray(coeff[0])
    x = np.ascontiguousarray(x)
    c = coeff[1]
    k = coeff[2]
    y = np.empty(x.shape[0], dtype=np.float64)

    n_arr = np.empty(1, dtype=np.int64)
    k_arr = np.empty(1, dtype=np.int64)
    m_arr = np.empty(1, dtype=np.int64)
    e_arr = np.empty(1, dtype=np.int64)
    ier_arr = np.zeros(1, dtype=np.int64)

    n_arr[0] = t.shape[0]
    k_arr[0] = k
    m_arr[0] = x.shape[0]
    e_arr[0] = e

    SPLEV(t.ctypes, n_arr.ctypes, c.ctypes, k_arr.ctypes, x.ctypes,
          y.ctypes, m_arr.ctypes, e_arr.ctypes, ier_arr.ctypes)
    return y, ier_arr[0]
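A minimal usage sketch (my addition, under the assumption that the wrappers above are fed the usual FITPACK tuple): scipy.interpolate.splrep returns the knots, B-spline coefficients and degree (t, c, k) that SPLEV expects, so the fitting can be done once outside the compiled code and only the evaluation runs inside @njit. Here e=0 asks splev to extrapolate outside the knot range; see the splev.f documentation for the other modes.

from scipy.interpolate import splrep

x = np.arange(10)
y = np.sin(x)
xs = np.arange(-0.5, 9.6, 0.1)

# fit once outside Numba; coeff is the (t, c, k) tuple
coeff = splrep(x, y)

# evaluate inside nopython code (splev_wrapped_2 works the same way)
ds, ier = splev_wrapped(xs, coeff, 0)
print(ds, ier)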