artificial-intelligence, particle-swarm

dynamic objective function in PSO algorithm


I'm working on a game-like project in the field of artificial intelligence, in which I'm using the PSO algorithm for the enemy to find the position of the player. Here is my question:

How can I optimize the PSO to find an objective that is not static and is constantly changing?

I want to implement some swarm intelligence algorithms in different levels of this game.


Solution

  • In PSO, prior to updating the particle velocities (and thereafter, after restricting the velocities, the positions), the particle-best and swarm-best positions are usually updated w.r.t. the objective function value of each particle in the swarm in the current iteration. Let's denote these positions particleBestPos (one per particle) and swarmBestPos, respectively. Finally, for each particle, let particleCurrentPos denote its current position.

    For each particle, considering some static objective f(...) to be minimized, the best positions are updated according to:

        if f(particleCurrentPos) < f(particleBestPos)
            particleBestPos <-- particleCurrentPos
        if f(particleBestPos) < f(swarmBestPos)
            swarmBestPos <-- particleBestPos
    

    Now, there should be no problem letting f(...) be a "dynamic" function describing, say, the distance from its argument (particleCurrentPos) to the player. The problem is rather that the particleBestPos and swarmBestPos positions have little value if the player is constantly moving.
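
    A minimal sketch of that best-update step with such a dynamic objective might look as follows in Python (the array layout, the function names and the Euclidean distance-to-player objective are my own assumptions):

        import numpy as np

        def f(pos, player_pos):
            # Assumed objective: Euclidean distance from a position to the player.
            return np.linalg.norm(pos - player_pos)

        def update_bests(positions, best_positions, swarm_best_pos, player_pos):
            # Standard PSO best-update step for a minimisation objective.
            # positions, best_positions: (n_particles, dims) arrays; swarm_best_pos: (dims,).
            for i in range(len(positions)):
                if f(positions[i], player_pos) < f(best_positions[i], player_pos):
                    best_positions[i] = positions[i]
                if f(best_positions[i], player_pos) < f(swarm_best_pos, player_pos):
                    swarm_best_pos = best_positions[i].copy()
            return swarm_best_pos

    With a moving player, the bests selected this way quickly go stale, which is exactly the problem described above.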

    One solution to this would be to keep a history of, say, each particle's n most recent positions. In the particle/swarm best-updating step above, for each particle: roll through the particle's history (its n latest positions) and let particleBestPos be the best among those with regard to the player's current position at this time (and determine swarmBestPos similarly).
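
    A rough sketch of that re-scoring step (the history length and the per-particle deque layout are assumptions of mine):

        import numpy as np
        from collections import deque

        N_HISTORY = 10  # assumed history length; tune per game level

        def f(pos, player_pos):
            return np.linalg.norm(pos - player_pos)

        def rescore_bests(histories, player_pos):
            # histories: one deque per particle, each holding up to N_HISTORY past
            # positions (create them as deque(maxlen=N_HISTORY) and append the
            # particle's position every iteration).
            # Re-score every stored position against the player's *current* position.
            particle_bests = [min(h, key=lambda pos: f(pos, player_pos)) for h in histories]
            swarm_best = min(particle_bests, key=lambda pos: f(pos, player_pos))
            return particle_bests, swarm_best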

    After this step, the particle velocities (and positions) are updated as usual, but with the reassurance that both the cognitive component (the weight toward particleBestPos) and the social component (the weight toward swarmBestPos) are somewhat up to date w.r.t. the player's current position. The history of each particle's n latest positions acts as a memory for the particle:

    "From the viewpoint of my and my swarms' history of n positions: which historic position, of my own as well as my swarm, would be the most advantageous to drift toward with regard to the players' current position"

    The longer the history you allow, the better (or more dangerous :) ) the swarm will be at tracking the moving player.


    Addition: Since your objective function (which evaluates and scores particles) is very simple (the current distance to the player) and the optimal solution (position) is always known (x* = player position), PSO is perhaps a bit of an overkill in this case, and you would be fine simply modelling your enemies as boids; see the Boids model.

    You could use the original Boids model but add an additional, heavily weighted rule that steers each boid toward the player.

    Note that the Boids model is not an optimisation model, but simply a way of modelling swarm behaviour, inspired by the flocking of birds. In your case, I believe it should suffice. The size of the weight on the additional player-steering rule is also a neat way to directly govern how well your enemies track the player.
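
    As a rough illustration of that idea (the 2-D setup, the neighbour radius and all weights below are assumptions of mine, layered on top of the three classic Boids rules):

        import numpy as np

        def boids_step(positions, velocities, player_pos,
                       neighbour_radius=50.0, w_sep=1.0, w_ali=0.5, w_coh=0.5,
                       w_player=2.0, max_speed=4.0, dt=1.0):
            # One update of a 2-D boid swarm with an extra, heavily weighted
            # steering rule toward the player. positions, velocities: (n, 2) arrays.
            acc = np.zeros_like(velocities)
            for i in range(len(positions)):
                offsets = positions - positions[i]
                dists = np.linalg.norm(offsets, axis=1)
                near = (dists > 0.0) & (dists < neighbour_radius)
                if near.any():
                    acc[i] += w_sep * -offsets[near].sum(axis=0)                       # separation
                    acc[i] += w_ali * (velocities[near].mean(axis=0) - velocities[i])  # alignment
                    acc[i] += w_coh * (positions[near].mean(axis=0) - positions[i])    # cohesion
                acc[i] += w_player * (player_pos - positions[i])                       # steer toward player
            velocities = velocities + acc * dt
            speeds = np.linalg.norm(velocities, axis=1, keepdims=True)
            scale = np.minimum(1.0, max_speed / np.maximum(speeds, 1e-9))              # cap speed
            velocities = velocities * scale
            positions = positions + velocities * dt
            return positions, velocities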