
Implementing the derivative in C/C++


How is the derivative of f(x) typically calculated programmatically to ensure maximum accuracy?

I am implementing the Newton-Raphson method, and it requires taking the derivative of a function.


Solution

  • I agree with @erikkallen that (f(x + h) - f(x - h)) / (2 * h) is the usual approach for numerically approximating derivatives. However, getting the right step size h is a little subtle.

    The approximation error in (f(x + h) - f(x - h)) / (2 * h) decreases as h gets smaller, which suggests taking h as small as possible. But as h shrinks, the error from floating-point subtraction grows, since the numerator requires subtracting nearly equal numbers. If h is too small, you can lose a lot of precision in the subtraction. So in practice you have to pick a not-too-small value of h that minimizes the combination of approximation error and numerical error.

    As a rule of thumb, you can try h = SQRT(DBL_EPSILON), where DBL_EPSILON is the smallest double-precision number e such that 1 + e != 1 in machine precision. DBL_EPSILON is about 2 × 10^-16, so you could use h = 10^-8 or 10^-7. (Strictly speaking, sqrt(DBL_EPSILON) balances the two errors for the one-sided difference (f(x + h) - f(x)) / h; for the central difference the truncation error shrinks like h^2, and the balance point moves up to roughly cbrt(DBL_EPSILON) ≈ 6 × 10^-6. Either choice works far better than an extreme one.)

    For more details, see these notes on picking the step size for numerical differentiation.

    Furthermore, here is a complete example: the central-difference derivative plugged into the Newton-Raphson iteration to solve Fibonacci's cubic x^3 + 2x^2 + 10x - 20 = 0:

    // #include "Eigen/Dense" // uncomment this, and check the result
    
    #include <functional>
    #include <iostream>
    #include <limits>
    #include <cmath>
    
    typedef std::function<double(double)> RealFunc;
    typedef std::function<double(std::function<double(double)>, double)> RealFuncDerivative;
    
    double FibonacciFunc(double x) {
        return pow(x, 3) + 2.0 * pow(x, 2) + 10.0 * x - 20.0;
    }
    
    double derivative(RealFunc f, double x) {
        double h = sqrt(std::numeric_limits<double>::epsilon());
        return (f(x + h) - f(x - h)) / (2.0 * h);
    }
    
    double NewtonsMethod(RealFunc f, RealFuncDerivative d, double x0, double precision) {
        double x = x0;
        for (size_t i = 0;; i++) {
            x = x - (f(x) / d(f, x));
    
            if (abs(f(x)) < precision) {
                return x;
            }
        }
    }
    
    int main() {
        RealFunc f{FibonacciFunc};
        RealFuncDerivative d{derivative};
    
        std::cout << NewtonsMethod(f, d, 1.0, 10e-4) << "\n";
    }
    

    The program converges in three iterations and prints 1.36881: the estimate after the first iteration is 1.41176, after the second 1.36933, and after the third 1.36881, at which point |f(x)| drops below the tolerance. This final value matches the analytical root of the cubic, 1.3688081..., to five decimal places.

    Note that even with a very large h (like h = 10 instead of h = SQRT(DBL_EPSILON)), the algorithm still converges, although quite slowly (1.36877 after 47 iterations): the central difference then badly overestimates the slope, so each Newton step is too small, and quadratic convergence degrades to linear. So a small h is not strictly necessary for convergence, but it is preferred, since an accurate derivative keeps Newton's method fast.