I have measured data points that I want to fit to a formula in order to determine two parameters. However, I get the error:
TypeError: ufunc 'bitwise_xor' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''
produced by the following Python code (I am using Python 3):
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
def func(T, fun, Tc):
    return fun*np.sqrt(np.cos(np.pi/2*(T/Tc)^2))
xdata=(4.61, 4.89, 4.92, 4.95, 5.06, 5.10, 5.21, 5.38, 5.41, 5.57, 5.80, 6.14, 6.61, 7.27, 7.66, 7.90, 8.91, 8.29, 8.23, 7.30, 7.86,
8.30, 8.89, 8.99, 9.24, 9.35, 9.50, 8.77, 8.27, 8.37, 7.72, 7.57, 7.99, 8.13) # these are temperature values <-> T
ydata=(2.85, 2.84, 2.83, 2.825, 2.82, 2.81, 2.80, 2.765, 2.76, 2.74, 2.695, 2.62, 2.50, 2.265, 2.105, 1.975, 1.23, 1.75, 1.81, 2.26,
2.005, 1.75, 1.31, 1.14, 1.015, 1.045, 1.06, 1.40, 1.75, 1.69, 2.075, 2.15, 1.93, 1.855) # these are energy values <-> func
popt, pcov = curve_fit(func, xdata, ydata)
print(popt)  # display the optimized values
Here comes the above error!!!
I have seen how to do this when the data are generated from a fixed formula with added noise, but here I have measured data points (they are necessarily discrete).
Thank you! Carsten
The TypeError itself comes from the expression (T/Tc)^2 in func: in Python, ^ is the bitwise XOR operator, which is not defined for float arrays; exponentiation is written **. Beyond that, a big part of your trouble is that you have not supplied initial values for the fitting variables fun and Tc. Unfortunately, scipy's curve_fit() allows this and silently assigns them the value 1.0, which encourages bad practice and is a very bad "feature". Don't use it.
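If you do stay with curve_fit(), you can (and should) pass your own starting values via the p0 argument. A minimal sketch, with the exponent written as ** and illustrative guesses of 5 for the scale and 10 for Tc, would be:

import numpy as np
from scipy.optimize import curve_fit

def func(T, scale, Tc):
    # ** is exponentiation; ^ is bitwise XOR and raises the TypeError above
    return scale * np.sqrt(np.cos(np.pi / 2 * (T / Tc)**2))

# xdata and ydata are the measured sequences from the question
popt, pcov = curve_fit(func, np.asarray(xdata), np.asarray(ydata), p0=[5.0, 10.0])
perr = np.sqrt(np.diag(pcov))  # 1-sigma uncertainties of scale and Tc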
Allow me to recommend lmfit (https://lmfit.github.io/lmfit-py), which provides a higher-level interface to curve fitting that is easier to use, better at avoiding bad behavior, and has lots of useful features unavailable with curve_fit().
With lmfit, your fit problem would look like (I changed a few variable names for clarity):
import numpy as np
import matplotlib.pyplot as plt
from lmfit import Model
def func(t, scale, tc):
    return scale*np.sqrt(np.cos(np.pi/2*(t/tc)**2))
tdata = np.array([4.61, 4.89, 4.92, 4.95, 5.06, 5.10, 5.21, 5.38, 5.41, 5.57, 5.80, 6.14, 6.61, 7.27, 7.66, 7.90, 8.91, 8.29, 8.23, 7.30, 7.86,
8.30, 8.89, 8.99, 9.24, 9.35, 9.50, 8.77, 8.27, 8.37, 7.72, 7.57, 7.99, 8.13]) # these are temperature values <-> T
energy = np.array([2.85, 2.84, 2.83, 2.825, 2.82, 2.81, 2.80, 2.765, 2.76, 2.74, 2.695, 2.62, 2.50, 2.265, 2.105, 1.975, 1.23, 1.75, 1.81, 2.26,
2.005, 1.75, 1.31, 1.14, 1.015, 1.045, 1.06, 1.40, 1.75, 1.69, 2.075, 2.15, 1.93, 1.855]) # these are energy values <-> func
fmodel = Model(func)
params = fmodel.make_params(scale=5, tc=10)
result = fmodel.fit(energy, params, t=tdata)
print(result.fit_report())
plt.plot(tdata, energy, 'o', label='data')
plt.plot(tdata, result.best_fit, '+', label='fit')
plt.legend()
plt.show()
which will print out a report of
[[Model]]
Model(func)
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 24
# data points = 34
# variables = 2
chi-square = 0.34988407
reduced chi-square = 0.01093388
Akaike info crit = -151.601474
Bayesian info crit = -148.548753
[[Variables]]
scale: 2.87776739 +/- 0.02737439 (0.95%) (init = 5)
tc: 9.68051725 +/- 0.03597889 (0.37%) (init = 10)
[[Correlations]] (unreported correlations are < 0.100)
C(scale, tc) = -0.506
and produce a plot comparing the data ('o' markers) with the best fit ('+' markers).
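If you then need the fitted values and their uncertainties programmatically rather than from the printed report, a short sketch reading them from result.params (standard lmfit ModelResult usage) is:

# pull best-fit values and 1-sigma uncertainties out of the lmfit result
tc_fit = result.params['tc'].value       # best-fit critical temperature
tc_err = result.params['tc'].stderr      # estimated standard error
scale_fit = result.params['scale'].value
scale_err = result.params['scale'].stderr
print(f"Tc = {tc_fit:.3f} +/- {tc_err:.3f}, scale = {scale_fit:.3f} +/- {scale_err:.3f}")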