Tags: machine-learning, regression, numerical-methods, non-linear-regression, levenberg-marquardt

How to improve Levenberg-Marquardt's method for polynomial curve fitting?


A few weeks ago I started coding the Levenberg-Marquardt algorithm from scratch in Matlab. I'm interested in fitting a polynomial to the data, but I haven't been able to reach the accuracy I would like. I settled on a fifth-order polynomial after trying other degrees; it seemed to be the best option. The algorithm always converges to the same minimum of the objective function no matter what improvements I try to implement. So far, I have unsuccessfully added the following features:

  • Geodesic acceleration term as a second-order correction
  • Delayed gratification for updating the damping parameter
  • Gain factor to move closer to the Gauss-Newton direction or the steepest-descent direction depending on the iteration (see the sketch after this list)
  • Central differences and forward differences for the finite-difference method
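
For reference, here is a minimal sketch of a gain-ratio damping update of the kind described above. This is not the asker's code: `r(p)` (residual vector), `J(p)` (Jacobian), the starting point `p`, and the initial `lambda` are all placeholders the caller must supply.

    % Minimal sketch of LM with a gain-ratio damping update (Nielsen's rule).
    % r(p): residual vector at parameters p (column vector); J(p): its Jacobian.
    function p = lm_fit(r, J, p, lambda, maxIter)
        nu = 2;
        for k = 1:maxIter
            res = r(p);
            Jk  = J(p);
            g   = Jk' * res;                          % gradient of 0.5*||r||^2
            A   = Jk' * Jk;
            dp  = -(A + lambda * eye(numel(p))) \ g;  % damped normal equations
            resNew = r(p + dp);
            % Gain ratio: actual reduction vs. the reduction predicted by the
            % local quadratic model. Large lambda pushes the step toward
            % steepest descent, small lambda toward Gauss-Newton.
            predicted = -(g' * dp) - 0.5 * (dp' * A * dp);
            rho = (res' * res - resNew' * resNew) / (2 * predicted);
            if rho > 0                                % step accepted
                p = p + dp;
                lambda = lambda * max(1/3, 1 - (2*rho - 1)^3);
                nu = 2;
            else                                      % rejected: more damping
                lambda = lambda * nu;
                nu = 2 * nu;
            end
        end
    end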

I don't have experience with nonlinear least squares, so I don't know whether there is a way to reduce the residual further or whether this method has no more room for improvement. I attach below an image of the polynomial's behavior over the last iterations. If I run the code for more iterations, the curve stops changing from iteration to iteration. As can be seen, the fit is good from time = 0 to time = 12, but I haven't been able to fix the behavior of the function from time = 12 to time = 20. Any help would be much appreciated.

[Figure: data points and various fitted curves]


Solution

  • Fitting a polynomial does not seem to be the best idea. Your data set looks like an exponential transient with a horizontal asymptote, and forcing a polynomial onto that will work very poorly.

    I'd rather try a simpler model, such as

    A (1 - e^(-at)).
    

    By eye, A ≈ 15. You should have a look at the values of log(15 - y): if the model is right, A - y ≈ A e^(-at), so those values should be roughly linear in t with slope -a.
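
    As a quick check of that suggestion, a minimal sketch (t and y stand for the asker's time and data vectors; A = 15 is the eyeballed value from above): a straight-line fit to log(A - y) recovers the rate a directly, and the intercept should come back close to log(A) if the model is a good one.

        % Linearized fit of y = A*(1 - exp(-a*t)) using the eyeballed A.
        A = 15;
        mask = y < A;                   % keep points where the log is defined
        z = log(A - y(mask));           % should be ~ log(A) - a*t
        c = polyfit(t(mask), z, 1);     % straight-line fit: z ~ c(1)*t + c(2)
        a    = -c(1);                   % estimated decay rate
        Aest = exp(c(2));               % consistency check: should be near A
        yhat = A * (1 - exp(-a * t));
        plot(t, y, 'o', t, yhat, '-');
        legend('data', 'A(1 - e^{-at})');

    The fitted a (and Aest, if you then refine both parameters with your own Levenberg-Marquardt code on the nonlinear model) should capture the plateau from time = 12 to time = 20 that the polynomial cannot.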