I have implemented an algorithm that can fit multiple data sets at the same time. It is based on this solution: multi fit
The target function is too complex to show here (LaFortune scatter model), so I will use the target function from that solution for explanation:
def lor_func(x, c, par):
    a, b, d = par
    return a / ((x - c)**2 + b**2)
How can I punish the fitting algorithm if it chooses a parameter set par that results in lor_func < 0?
A negative value for the target function is valid from a mathematical point of view, so the parameter set par resulting in a negative target function might be the solution with the least error. But I want to exclude such solutions, as they are not physically valid.
A function like:
def lor_func(x, c, par):
    a, b, d = par
    value = a / ((x - c)**2 + b**2)
    return max(0, value)
does not work: the fit returns wrong data because it optimizes the clamped zero values as well, so the result differs from the correct one.
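One alternative I am considering is to punish negative model values through the residuals rather than clamping the model itself. This is only a sketch, assuming the residual function passed to least_squares can be modified freely; the helper name residuals_with_penalty and the weight value are made up for illustration:

import numpy as np

def residuals_with_penalty(par, x, y, c, weight=1e3):
    # Model values without any clamping (lor_func as defined above)
    model = lor_func(x, c, par)
    data_residuals = model - y
    # Extra residuals that are zero wherever the model is non-negative and
    # grow with the magnitude of any negative model value, so negative
    # parameter sets are punished without touching the data residuals
    penalty = weight * np.minimum(model, 0.0)
    return np.concatenate([data_residuals, penalty])

This would be passed to least_squares in place of the plain residual function, e.g. least_squares(residuals_with_penalty, par_guess, args=(x, y, c)).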
Use the bounds argument of scipy.optimize.least_squares:
res = least_squares(func, x_guess, args=(Gd, K),
                    bounds=([0.0, -100, 0, 0],
                            [1.0, 0.0, 10, 1]),
                    max_nfev=100000, verbose=1)
like I did here: Suggestions for fitting noisy exponentials with scipy curve_fit?
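For illustration, here is a minimal, self-contained sketch of this approach applied to the example lor_func from the question; the synthetic data, the initial guess and the bound values are assumptions. For this particular model, constraining a >= 0 (and keeping b away from zero) is enough to keep lor_func non-negative, because the denominator (x - c)**2 + b**2 is always positive:

import numpy as np
from scipy.optimize import least_squares

def lor_func(x, c, par):
    a, b, d = par
    return a / ((x - c)**2 + b**2)

def residuals(par, x, y, c):
    return lor_func(x, c, par) - y

# Synthetic data, only for illustration
rng = np.random.default_rng(0)
x = np.linspace(-5.0, 5.0, 200)
c_known = 0.5
par_true = (2.0, 1.0, 0.0)
y = lor_func(x, c_known, par_true) + 0.05 * rng.normal(size=x.size)

# The lower bound a >= 0 excludes negative model values; the bound values
# themselves are illustrative and should be adapted to the real model
par_guess = [1.0, 0.5, 0.0]
res = least_squares(residuals, par_guess, args=(x, y, c_known),
                    bounds=([0.0, 1e-6, -np.inf],
                            [np.inf, np.inf, np.inf]),
                    max_nfev=100000, verbose=1)
print(res.x)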