I'm encountering a scipy function that seems to return a numpy array no matter what's passed to it. In my application I need to be able to pass scalars and lists only, so the only "problem" is that when I pass a scalar to the function an array with one element is returned (when I would expect a scalar). Should I ignore this behaviour, or hack up the function to ensure that when a scalar is passed a scalar is returned?
Example code:
#! /usr/bin/env python
import scipy
import scipy.optimize
from numpy import cos

# This is the function we want to compute the inverse of
def f(x):
    y = x + 2*cos(x)
    return y

# Given y, this returns x such that f(x)=y
def f_inverse(y):
    # This will be zero if f(x)=y
    def minimize_this(x):
        return y - f(x)
    # A guess for the solution is required
    x_guess = y
    x_optimized = scipy.optimize.fsolve(minimize_this, x_guess)  # THE PROBLEM COMES FROM HERE
    return x_optimized
# If I call f_inverse with a list, a numpy array is returned
print f_inverse([1.0, 2.0, 3.0])
print type( f_inverse([1.0, 2.0, 3.0]) )
# If I call f_inverse with a tuple, a numpy array is returned
print f_inverse((1.0, 2.0, 3.0))
print type( f_inverse((1.0, 2.0, 3.0)) )
# If I call f_inverse with a scalar, a numpy array is returned
print f_inverse(1.0)
print type( f_inverse(1.0) )
# This is the behaviour I expected (scalar passed, scalar returned).
# Adding [0] to the return value is a hacky solution (things would break if a list were actually passed).
print f_inverse(1.0)[0] # <- bad solution
print type( f_inverse(1.0)[0] )
On my system the output of this is:
[ 2.23872989 1.10914418 4.1187546 ]
<type 'numpy.ndarray'>
[ 2.23872989 1.10914418 4.1187546 ]
<type 'numpy.ndarray'>
[ 2.23872989]
<type 'numpy.ndarray'>
2.23872989209
<type 'numpy.float64'>
I'm using SciPy 0.10.1 and Python 2.7.3 provided by MacPorts.
SOLUTION
After reading the answers below I settled on the following solution (it also requires import numpy as np at the top of the file). Replace the return line in f_inverse with:
if type(y).__module__ == np.__name__:
    return x_optimized
else:
    return type(y)(x_optimized)
Here return type(y)(x_optimized) makes the return type the same as the type the function was called with. Unfortunately this does not work if y is a numpy type, so the check type(y).__module__ == np.__name__ is used to detect numpy types (using the idea presented here) and exclude them from the type conversion.
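As a quick illustration of the pattern (my own sketch, not part of the original code; the helper name match_input_type is made up), the conversion can be exercised on its own:

import numpy as np

def match_input_type(y, result):
    # result is the ndarray returned by fsolve; convert it back to the
    # caller's type unless the caller already passed a numpy object.
    if type(y).__module__ == np.__name__:
        return result
    return type(y)(result)

print match_input_type(1.0, np.array([2.23872989]))          # float
print match_input_type([1.0, 2.0], np.array([2.23, 1.10]))   # list
print match_input_type(np.float64(1.0), np.array([2.23]))    # numpy input, returned unchanged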
The first line of the implementation in scipy.optimize.fsolve is:

x0 = array(x0, ndmin=1)
This means that your scalar will be turned into a 1-element sequence, and your 1-element sequence will be essentially unchanged.
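For example (a small demonstration of that line's effect, not taken from the scipy source):

import numpy as np

print np.array(1.0, ndmin=1)          # [ 1.]       scalar becomes a 1-element array
print np.array([1.0, 2.0], ndmin=1)   # [ 1.  2.]   a sequence stays a 1-d array
print np.array((1.0,), ndmin=1)       # [ 1.]       a 1-element tuple likewise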
The fact that it seems to work is an implementation detail, and I would refactor your code to not allow sending a scalar into fsolve. I know this might seem to go against duck-typing, but the function asks for an ndarray for that argument, so you should respect the interface to be robust to changes in implementation. I don't, however, see any problem with conditionally using x_guess = array(y, ndmin=1) for converting scalars into an ndarray in your wrapper function and converting the result back to a scalar when necessary.
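A sketch of that suggestion might look like the following (np.isscalar and the unwrapping with [0] are my choices, not spelled out in the answer; f is the same function as above):

import numpy as np
import scipy.optimize

def f_inverse(y):
    def minimize_this(x):
        return y - f(x)
    scalar_input = np.isscalar(y)
    # Always hand fsolve an ndarray, as its documented interface asks for.
    x_guess = np.array(y, ndmin=1)
    x_optimized = scipy.optimize.fsolve(minimize_this, x_guess)
    # Convert the result back to a scalar only when a scalar was passed in.
    if scalar_input:
        return x_optimized[0]
    return x_optimized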
Here is the relevant part of the docstring of the fsolve function:
def fsolve(func, x0, args=(), fprime=None, full_output=0, col_deriv=0,
           xtol=1.49012e-8, maxfev=0, band=None, epsfcn=0.0, factor=100,
           diag=None):
    """
    Find the roots of a function.

    Return the roots of the (non-linear) equations defined by
    ``func(x) = 0`` given a starting estimate.

    Parameters
    ----------
    func : callable f(x, *args)
        A function that takes at least one (possibly vector) argument.
    x0 : ndarray
        The starting estimate for the roots of ``func(x) = 0``.

    ----SNIP----

    Returns
    -------
    x : ndarray
        The solution (or the result of the last iteration for an
        unsuccessful call).

    ----SNIP----