Tags: python, loops, optimization, convergence

How to optimize an algorithm that uses loops to find a stable value for a variable


I have a case where a variable (a, in this case) is recalculated at each loop iteration, and the loop stops when the change in value between two iterations is small enough.

I would like to know a general way to find the value of that variable in this kind of case, without having to do that "convergence" work with loops.

Specifically, I would like to know whether the solution is to put everything into equations, or whether tools exist to tackle this.

a = 10     # value recalculated at each iteration
b = 10     # constant increment added at every iteration

diff = 1   # start above the threshold so the loop runs at least once

while diff > .1:
    old_a = a
    a += b
    diff = 1 - (old_a/a)   # relative change of a between two iterations
    print(diff)

This code produces:

0.5
0.33333333333333337
0.25
0.19999999999999996
0.16666666666666663
0.1428571428571429
0.125
0.11111111111111116
0.09999999999999998

Therefore, it takes 9 iterations for the relative difference of a between two iterations to fall below 10%.


Solution

  • You have

    a_n = a_0 + n * b

    and are trying to find the first n where

    1 - (a_(n-1) / a_n)
    = 1 - (a_0 + (n-1)*b) / (a_0 + n*b)
    = 1 - (a_0 + n*b - b) / (a_0 + n*b)
    = 1 - 1 + b / (a_0 + n*b)
    = b / (a_0 + n*b)
    < 0.1
    

    That is the same as

    (a_0 + n*b) / b
    = (a_0 / b) + n
    > 10
    

    (because 0.1 = 1/10, and 1/x > 1/y <=> y > x for x, y > 0); a short sketch of computing the iteration count directly from this closed form is given below.

    Since you mention in the comments that your actual problem is more complex: if finding a closed-form solution like the one above is not feasible, look at the Wikipedia page on fixed-point iteration (https://en.wikipedia.org/wiki/Fixed-point_iteration), which covers exactly the kind of problem you are trying to solve; a small sketch of that approach also follows below.
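
    For the concrete loop in the question, the closed form above can be evaluated directly instead of iterating. Here is a minimal sketch of that idea; the helper name iterations_needed is made up for illustration, and the condition mirrors the loop's while diff > .1 test, i.e. it finds the first n with b / (a_0 + n*b) <= 0.1:

    import math

    def iterations_needed(a0, b, threshold=0.1):
        # The loop stops at the first n with b / (a0 + n*b) <= threshold,
        # which rearranges to n >= (b / threshold - a0) / b.
        return max(0, math.ceil((b / threshold - a0) / b))

    print(iterations_needed(10, 10))  # 9, matching the nine values printed above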
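
    And when the real problem has no usable closed form, fixed-point iteration can still be written once and reused. Below is a minimal sketch on a made-up example (finding x with x = cos(x), starting from 1.0), first hand-rolled in the same style as the loop in the question and then via scipy.optimize.fixed_point, assuming SciPy is available:

    import numpy as np
    from scipy.optimize import fixed_point

    def g(x):
        # Made-up example with no simple closed form: we look for x with x = cos(x)
        return np.cos(x)

    # Hand-rolled fixed-point iteration, same structure as the while loop above
    x = 1.0
    for _ in range(1000):
        x_new = g(x)
        if abs(x_new - x) < 1e-10:  # stop once successive values barely change
            break
        x = x_new
    print(x)                  # ~0.7390851332151607

    # Off-the-shelf, accelerated version of the same idea
    print(fixed_point(g, 1.0))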