Tags: python, optimization, convex-optimization, particle-swarm

Can PSO converge at a point with non-zero derivative?


I am using this library - https://pythonhosted.org/pyswarm/ - to find the global minimum of a convex function. This is just to get started before working towards a non-convex function. I found the global minimum using linear regression, but the problem is that PSO seems to converge at different points depending on the values of omega, phip, and phig that I set. I can confirm that these points aren't the global minimum by comparing their cost with the cost at the minimum found by linear regression.
Is it possible for PSO to converge (the value doesn't change after 10 iterations) at such a point, or am I making a mistake somewhere?


Solution

  • It is absolutely possible for PSO to converge in the wrong place. The thing about metaheuristics is that they can take a lot of time to run, and stalling in the wrong place for ten iterations is eminently possible; a fixed no-improvement window is a termination heuristic, not evidence of convergence. Furthermore, converging to the true global minimum can take a very long time, and the algorithm can never prove that it has converged to a global minimum; it can only reach a termination criterion. Your expectation of a metaheuristic should be that it eventually gives you a good answer, not that it always converges to the global minimum.

    In compensation for these drawbacks (long run time, no guarantee of global minimization) you get an optimization algorithm that can handle any kind of function evaluation or fitness landscape, including non-differentiable and black-box functions.
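
    To see how parameter choice affects where PSO settles, here is a minimal, stdlib-only PSO sketch. It is not the pyswarm implementation; the `omega`/`phip`/`phig` names simply mirror pyswarm's parameters, and the function, bounds, and all other names here are made up for illustration. On a convex quadratic with standard settings it reaches the known minimum; with a large inertia and tiny attraction coefficients it can stagnate elsewhere.

    ```python
    import random

    def pso(f, lb, ub, n_particles=30, omega=0.7, phip=1.5, phig=1.5,
            maxiter=200, seed=0):
        """Minimal PSO sketch (illustrative, not pyswarm's code)."""
        rng = random.Random(seed)
        dim = len(lb)
        # initialize positions uniformly in the bounds, velocities at zero
        pos = [[rng.uniform(lb[d], ub[d]) for d in range(dim)]
               for _ in range(n_particles)]
        vel = [[0.0] * dim for _ in range(n_particles)]
        pbest = [p[:] for p in pos]               # per-particle best position
        pbest_val = [f(p) for p in pos]
        g = min(range(n_particles), key=lambda i: pbest_val[i])
        gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm-wide best
        for _ in range(maxiter):
            for i in range(n_particles):
                for d in range(dim):
                    rp, rg = rng.random(), rng.random()
                    # inertia + pull toward personal best + pull toward global best
                    vel[i][d] = (omega * vel[i][d]
                                 + phip * rp * (pbest[i][d] - pos[i][d])
                                 + phig * rg * (gbest[d] - pos[i][d]))
                    # clamp the new position to the search bounds
                    pos[i][d] = min(max(pos[i][d] + vel[i][d], lb[d]), ub[d])
                val = f(pos[i])
                if val < pbest_val[i]:
                    pbest[i], pbest_val[i] = pos[i][:], val
                    if val < gbest_val:
                        gbest, gbest_val = pos[i][:], val
        return gbest, gbest_val

    # convex quadratic with its minimum at x = (3, -1), f = 0
    def quad(x):
        return (x[0] - 3) ** 2 + (x[1] + 1) ** 2

    xopt, fopt = pso(quad, lb=[-10, -10], ub=[10, 10])
    print(xopt, fopt)
    ```

    With well-balanced coefficients (e.g. `omega=0.7`, `phip=phig=1.5`, close to the standard constriction values) the swarm should land near `(3, -1)`. Try `omega=0.95, phip=0.1, phig=0.1` to watch it wander or stall far from the minimum: the "converged" point then depends on the parameters, which matches the behavior described in the question.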