Tags: matlab, for-loop, while-loop, iteration, convergence

Iterative solution for achieving convergence


I am trying to calculate n in the equation f = n^2 iteratively, so that f ends up close to 3 with a precision of 3 decimal places, approached from below. For illustration I am including code that successfully does what I want. The problem with this code is that instead of f = n^2 I actually call an external script, and with that approach the entire iteration becomes extremely slow and cannot be used: it requires too many cycles to converge to the given precision. I am therefore looking for a faster way to achieve convergence; by "faster" I mean converging in a smaller number of cycles. Someone recommended a binary search algorithm to me. Could you help me build one for this little example, f = n^2?

Here is my code which demonstrates what I want.

n = 1; % initial value
f=n^2; % function to calculate
tol = 3;   % tolerance
accuracy = 0.0000001; % accuracy

while f <= tol
    n = n+accuracy;
    f=n^2;
end
n = n-accuracy;
f=n^2;
disp(['n = ' num2str(n,10)])
disp(['f = ' num2str(f,10)])

Update: Here is my attempt at setting up the function and its derivative for the Newton-Raphson algorithm. Unfortunately, it is probably incorrect, so I would like to ask for help.

n = 7.76; % initial guess
f_error = [7.73 9 12.404 7.659 8.66];
df_error = diff(f_error)
xNew = n - f_error./df_error

Solution

  • Newton-Raphson Method

    f_error(x) = x^(1/2) - N
    f'_error(x) = (1/2)*x^(-1/2)
    xNew = x - f_error(x)/f'_error(x)
    repeat:
    xNew = x - (x^(1/2) - N) / ((1/2)*x^(-1/2))
    

    This approaches the target value much more quickly than scanning every floating-point-representable value between the starting point and the target:

    n = 3; % initial guess
    nNew = n;
    val = 3; % target value
    f = val^2; % expected result (val^2 = 9)
    
    accuracy = 0.0001; % accuracy
    error = 10000;
    count = 0;
    
    while abs(error) > accuracy
        nNew = n - (n^(1/2) - val)/((1/2)*n^(-1/2));
        error = nNew - n;
        n = nNew;
        count = count + 1;
    end
    
    disp(['c = ' num2str(count,10)])
    disp(['n = ' num2str(n,10)])
    disp(['f = ' num2str(f,10)])
    

    output:

    c = 5 <--- only 5 iterations
    n = 9
    f = 9
    

    Your method fails to complete the calculation within the time limit of the https://octave-online.net server:

    n = 1; % initial value
    f=n^2; % function to calculate
    tol = 3;   % tolerance
    accuracy = 0.0000001; % accuracy
    count = 0;
    
    while f <= tol
        n = n+accuracy;
        f=n^2;
        count = count+1;
    end
    n = n-accuracy;
    f=n^2;
    disp(['c = ' num2str(count,10)])
    disp(['n = ' num2str(n,10)])
    disp(['f = ' num2str(f,10)])
    

    output:

    !!! OUT OF TIME !!!
    c = 847309
    n = 1.0847309
    f = 1.176641126
    

    It advanced only 0.1766 in 5 seconds, so it would need at least 1-2 minutes to complete, while the Newton-Raphson method finished in only 5 iterations. The n = n + accuracy method also breaks down for big n values because of rounding error: if the next binary-representable floating-point value after n is farther away than accuracy, the addition rounds back to n and the loop gets stuck forever.
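    For completeness, the binary search the question asks about is also a big improvement over stepping: since n^2 is monotonic for n > 0, bisection halves the search interval every cycle, so reaching a precision of 1e-7 on an interval of width 3 needs only about ceil(log2(3/1e-7)) = 25 iterations. A minimal sketch (the bracket [0, 3] is my assumption):

```matlab
% bisection (binary search) for n with n^2 = 3, precision 1e-7
% assumes the root is bracketed: lo^2 < 3 < hi^2
lo = 0; hi = 3;
tol = 0.0000001;
count = 0;
while hi - lo > tol
    mid = (lo + hi)/2;
    if mid^2 < 3
        lo = mid;   % root is in the upper half
    else
        hi = mid;   % root is in the lower half
    end
    count = count + 1;
end
disp(['n = ' num2str(lo,10)])    % approx 1.732050808
disp(['c = ' num2str(count,10)]) % about 25 iterations
```

    Note that lo always approaches sqrt(3) from below, which matches the question's "precision from the bottom" requirement.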


    The Newton-Raphson method above uses a square root. Many x86 CPUs have a dedicated sqrt instruction, so it is very likely hardware-accelerated, but you can also compute the square root itself with another Newton-Raphson iteration:

    f_error(x) = x^2 - N
    f'_error(x) = 2*x
    xNew = x - f_error(x)/f'_error(x)
    xNew = (1/2)*x + (1/2)*(N/x)
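    As a sketch, this is Heron's method, xNew = (x + N/x)/2, which converges quadratically; the fixed iteration count below is a conservative guess:

```matlab
% Heron's method: Newton-Raphson for x^2 - N = 0
N = 3;
x = N;                 % initial guess (any positive value works)
for k = 1:10
    x = (x + N/x)/2;   % xNew = (1/2)*x + (1/2)*(N/x)
end
disp(x)                % approx sqrt(3) = 1.7320508
```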
    

    but it still contains a division for N/x. If the CPU has no division instruction, you can use yet another Newton-Raphson iteration for the reciprocal:

    f_error(x) = 1/x - N
    f'_error(x) = -1/(x*x)
    xNew = x - f_error(x)/f'_error(x)
    xNew = x*(2-N*x)
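    Sketched in Octave/MATLAB (the initial guess must lie in (0, 2/N) for this iteration to converge; the value 0.1 below is my assumption for N = 3):

```matlab
% division-free reciprocal: Newton-Raphson for 1/x - N = 0
N = 3;
x = 0.1;               % initial guess, must satisfy 0 < x < 2/N
for k = 1:10
    x = x*(2 - N*x);   % only multiplications and a subtraction
end
disp(x)                % approx 1/3 = 0.3333333
```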
    

    Applying this inside both of the other methods would give a fully multiplication-based version, but it would probably be slower: the iterations nest, so the total count grows roughly as the cube of a single loop's count, and it could need hundreds of iterations overall.
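    To make the nesting concrete, here is a sketch that runs the reciprocal iteration inside Heron's step so that only multiplications remain; the fixed inner/outer iteration counts are my guesses, not tuned values:

```matlab
% multiplication-only sqrt sketch: Heron's step with N/x replaced
% by N*r, where r approximates 1/x via the reciprocal iteration
N = 3;
x = 1.5;                    % outer guess for sqrt(N)
for outer = 1:8
    r = 0.5;                % inner guess for 1/x, must be in (0, 2/x)
    for inner = 1:30
        r = r*(2 - x*r);    % reciprocal iteration, multiplications only
    end
    x = 0.5*x + 0.5*(N*r);  % Heron step using N*(1/x) instead of N/x
end
disp(x)                     % approx sqrt(3)
```

    The 8 outer steps each pay for 30 inner steps, which is why the nested version needs far more iterations in total than the plain method.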


    Edit: for an unknown f function that runs on a remote server, you can use a simple two-point derivative. It is the simplest of the discrete derivatives and has a higher error than the 3-point, 5-point, etc. variants:

    % remote server function that has unknown algorithm
    % so only numerical derivative will be used
    function result = blackBoxFunction(x)
        result = x^(1/2); 
    end
    
    % numerical (discrete two-point 1-dimensional) derivative
    % beware: the error will be h^2 so you should keep h small
    % add more points (with coefficients) for h^3 or better error
    function result = twoPointDerivative(x,y,h)
        result = (y-x)/(2*h); 
    end
    
    n = 3; % initial guess
    nNew = n;
    val = 3; % target value
    f = val^2; % expected result (val^2 = 9)
    
    accuracy = 0.0001; % accuracy
    error = 10000;
    count = 0;
    
    % Newton-Raphson with a blackbox function
    % f_error(n) = blackBoxFunction(n) - val
    % f'_error(n) = derivative_of_blackbox(n)
    % nNew = n - f_error(n)/f'_error(n)
    % repeat:
    % nNew = n - (blackBoxFunction(n)-val)/derivative_of_blackbox(n) 
    
    while abs(error) > accuracy
        point1 = blackBoxFunction(n-accuracy);
        point2 = blackBoxFunction(n+accuracy);
        nNew = n - (blackBoxFunction(n)-val)/twoPointDerivative(point1,point2,accuracy);
        error = nNew - n;
        n = nNew;
        count = count + 1;
    end
    
    disp(['c = ' num2str(count,10)])
    disp(['n = ' num2str(n,10)])
    disp(['f = ' num2str(f,10)])
    

    output:

    c = 5
    n = 9
    f = 9
    

    If you need even fewer iterations than 5, you should use a better initial guess, perhaps a second-order polynomial approximation to x^2 (it is obviously c1 + c2*x + c3*x^2 with c1 = 0, c2 = 0, c3 = 1 for the x-square problem here, but you can always get help from series expansions, genetic algorithms and other heuristics to find a set of coefficients that minimizes the Newton-Raphson iterations over the whole input range, not just 3).

    For example, suppose you have cos(x) as a blackbox function and you want to reach acos(val); then a polynomial initial guess is quite effective:

    % remote server function that has unknown algorithm
    % so only numerical derivative will be used
    function result = blackBoxFunction(x)
        result = cos(x); 
    end
    
    % numerical (discrete two-point 1-dimensional) derivative
    % beware: the error will be h^2 so you should keep h small
    % add more points (with coefficients) for h^3 or better error
    function result = twoPointDerivative(x,y,h)
        result = (y-x)/(2*h); 
    end
    
    val = 0.04; % value that is the input of this problem: what is acos(0.04) ??
    n = 1 - (val^2)/2 + (val^4)/24; % initial guess by polynomial
    nNew = n;
    f = acos(val); % reference result to compare against
    
    accuracy = 0.0001; % accuracy
    error = 10000;
    count = 0;
    
    % Newton-Raphson with a blackbox function
    % f_error(n) = blackBoxFunction(n) - val
    % f'_error(n) = derivative_of_blackbox(n)
    % nNew = n - f_error(n)/f'_error(n)
    % repeat:
    % nNew = n - (blackBoxFunction(n)-val)/derivative_of_blackbox(n) 
    
    while abs(error) > accuracy
        point1 = blackBoxFunction(n-accuracy);
        point2 = blackBoxFunction(n+accuracy);
        nNew = n - (blackBoxFunction(n)-val)/twoPointDerivative(point1,point2,accuracy);
        error = nNew - n;
        n = nNew;
        count = count + 1;
    end
    
    disp(['c = ' num2str(count,10)])
    disp(['n = ' num2str(n,10)])
    disp(['f = ' num2str(f,10)])
    

    output:

    c = 3 <-- 2 fewer iterations !!
    n = 1.530785652
    f = 1.530785652
    

    With a five-point derivative:

    % remote server function that has unknown algorithm
    % so only numerical derivative will be used
    function result = blackBoxFunction(x)
        result = cos(x); 
    end
    
    % numerical (discrete five-point 1-dimensional) derivative
    % beware: the error will be h^4 so you should keep h small
    % add more points (with coefficients) for h^6 or better error
    function result = fivePointDerivative(x,h,f)
        result = (f(x-2*h)-8*f(x-h)+8*f(x+h)-f(x+2*h))/(12*h); 
    end
    
    val = -1; % value that is the input of this problem: what is acos(-1) ?? (it is PI)
    n = 1 - (val^2)/2 + (val^4)/24; % initial guess by polynomial
    nNew = n;
    f = acos(val); % reference result to compare against
    
    accuracy = 0.00000000001; % accuracy
    error = 1;
    count = 0;
    
    % Newton-Raphson with a blackbox function
    % f_error(n) = blackBoxFunction(n) - val
    % f'_error(n) = derivative_of_blackbox(n)
    % nNew = n - f_error(n)/f'_error(n)
    % repeat:
    % nNew = n - (blackBoxFunction(n)-val)/derivative_of_blackbox(n) 
    h_diff = 0.0000001;
    while abs(error) > accuracy
        nNew = n - (blackBoxFunction(n)-val)/fivePointDerivative(n,h_diff,@blackBoxFunction);
        error = nNew - n;
        n = nNew;
        count = count + 1;
    end
    
    disp(['c = ' num2str(count,15)])
    disp(['n = ' num2str(n,15)])
    disp(['f = ' num2str(f,15)])
    

    output:

    c = 29
    n = 3.14159265683019
    f = 3.14159265358979
                  ^ 3e-9 error
    

    Also, if the blackbox function is very expensive, this simple derivative will be slow (it costs extra blackbox calls every iteration), but you can look into non-uniform discrete-derivative methods that re-use blackbox outputs from earlier iterations to minimize the run-time.
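    One classic way to do that re-use is the secant method, which replaces the derivative with a finite difference of the two most recent iterates, so each step costs a single new blackbox evaluation. A sketch for the sqrt(n) = 3 example above (the two starting guesses are my assumption):

```matlab
% secant method: derivative-free, re-uses the previous evaluation,
% one new blackbox call per iteration
f = @(x) x^(1/2);            % stand-in for the remote blackbox
val = 3;                     % target output
n0 = 2;  e0 = f(n0) - val;   % first guess and its error
n1 = 4;  e1 = f(n1) - val;   % second guess and its error
count = 0;
while abs(n1 - n0) > 0.0001
    n2 = n1 - e1*(n1 - n0)/(e1 - e0);  % secant step
    n0 = n1; e0 = e1;
    n1 = n2; e1 = f(n1) - val;         % the only new evaluation
    count = count + 1;
end
disp(['n = ' num2str(n1,10)])          % approx 9
```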