I'm doing a lot of cubic spline interpolation using GSL. Say I have three independent variables `a`, `b` and `c`, all tabulated at the same physical data points (it could be the same set of positions measured in meters, feet and miles), as well as two dependent variables `y` and `z`, tabulated at the same points. That is, the data for the functions `y(a)`, `y(b)`, `y(c)`, `z(a)`, `z(b)` and `z(c)` are tabulated. I now make six cubic splines for these functions, as illustrated here for the `y(a)` spline:
```c
gsl_interp_accel *acc = gsl_interp_accel_alloc();
gsl_spline *spline = gsl_spline_alloc(gsl_interp_cspline, size);
gsl_spline_init(spline, a, y, size);
```
where `size` is the length of the `a` and `y` arrays (all six arrays have equal size).
My question: Do I really need a separate accelerator for each spline? Is it faster this way, and is it even safe to share an accelerator across multiple splines?
Yes, you need an accelerator per spline, and it is anything but safe to use the same accelerator for multiple splines. As I assume you already guessed, the accelerator is a cache of the last index lookup; fed mixed input from different splines, it will at best slow the interpolations down.
If you are concerned about the performance cost of allocating and freeing accelerators very often, just keep the accelerators around and reset them after each use with `gsl_interp_accel_reset()`.
What can be a big performance gain, depending on the size of your binary and other factors that influence memory lookups, is the use of `-DHAVE_INLINE=1` during compilation. It causes `gsl_interp_accel_find` to be inlined from the header rather than called as the compiled version in `libgsl`.
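A typical invocation would be something like (library names and paths may differ on your system):

```shell
# Define HAVE_INLINE so gsl_interp_accel_find is inlined from the header.
gcc -O2 -DHAVE_INLINE=1 -o interp interp.c -lgsl -lgslcblas -lm
```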