
Goldberg's log1p vs. gsl_log1p


I am looking for a simple portable implementation of log1p. I have come across two implementations.

The first one appears as Theorem 4 in Goldberg's paper: http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html

An implementation of the above:

#include <math.h>

/* Theorem 4 from Goldberg's paper; the volatile qualifier keeps the
   compiler from simplifying (1 + y) - 1 to y */
double log1p(double p)
{
   volatile double y = p;
   return ( (1 + y) == 1 ) ? y : y * ( log( 1 + y) / ( ( 1 + y) - 1 ) );
}

The second one is from GSL: http://fossies.org/dox/gsl-1.16/log1p_8c_source.html

double gsl_log1p (const double x)
{
  volatile double y, z;
  y = 1 + x;      /* rounded result of 1 + x */
  z = y - 1;      /* the part of x that actually made it into y */
  return log(y) - (z-x)/y ; /* cancels errors with IEEE arithmetic */
}
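
To see why either of these beats the naive log(1 + x), here is a small check that can be compiled against the gsl_log1p() definition above (the exact digits printed depend on the platform's log()):

#include <math.h>
#include <stdio.h>

double gsl_log1p (const double x);  /* definition as above */

int main (void)
{
    double x = 1e-15;
    /* the true value is x - x*x/2 + ... ~= 1.0e-15; in 1 + x most of
       the low-order bits of x are rounded away, so the naive result
       is off by roughly 11% */
    printf ("naive log(1+x): %.17g\n", log (1 + x));
    printf ("gsl_log1p(x)  : %.17g\n", gsl_log1p (x));
    return 0;
}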

Is there a reason to prefer one over the other?


Solution

  • I have tested these two approaches using a log() implementation with a maximum error of < 0.51 ulps, comparing against a multi-precision arithmetic library (a sketch of such a test harness appears at the end of this answer). Using that log() implementation as a building block for the two log1p() variants, I found the maximum error of Goldberg's version to be < 2.5 ulps, while the maximum error of the GSL variant was < 1.5 ulps, so the latter is significantly more accurate.

    In terms of special-case handling, the Goldberg variant showed one mismatch: it returns a NaN for an input of +infinity, whereas the correct result is +infinity. The GSL implementation showed three mismatches: inputs of -1 and +infinity deliver a NaN, while the correct results are -infinity and +infinity, respectively, and an input of -0 returns +0, whereas the correct result is -0. A wrapper that patches these cases is sketched below.

    It is difficult to assess performance without knowing the distribution of the inputs. As others have pointed out in the comments, Goldberg's version is potentially faster when many arguments are close to zero, since it skips the expensive call to log() for such arguments.
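
    The original test used a multi-precision library as the reference; as a rough, self-contained substitute, the sketch below uses the higher-precision log1pl() instead and measures the error in units of the ulp at the reference result. The harness itself (names, argument distribution, iteration count) is my own illustration, not the code behind the numbers above:

    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>

    double gsl_log1p (const double x)   /* variant under test, as above */
    {
      volatile double y, z;
      y = 1 + x;
      z = y - 1;
      return log(y) - (z-x)/y;
    }

    /* error of approx relative to log1p(x), in ulps; log1pl() stands in
       for a true multi-precision reference */
    static double ulp_error (double approx, double x)
    {
        long double ref = log1pl ((long double) x);
        double r = (double) ref;
        double ulp = fabs (nextafter (r, INFINITY) - r);  /* ulp at result */
        return (double) fabsl ((long double) approx - ref) / ulp;
    }

    int main (void)
    {
        double max_err = 0.0;
        srand (1);
        for (int i = 0; i < 10000000; i++) {
            /* random arguments in (-1, 1), bounded away from -1 itself */
            double x = -1.0 + 2.0 * ((rand () + 0.5) / ((double) RAND_MAX + 1.0));
            double err = ulp_error (gsl_log1p (x), x);
            if (err > max_err) max_err = err;
        }
        printf ("max observed error: %.2f ulps\n", max_err);
        return 0;
    }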
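
    A wrapper along the following lines can patch up the special cases noted above while deferring to the GSL code for ordinary arguments. The name log1p_fixed is mine, not part of GSL:

    #include <math.h>

    double gsl_log1p (const double x);  /* as defined in the question */

    /* illustrative wrapper: fixes the -1, +infinity, and -0 mismatches */
    double log1p_fixed (double x)
    {
        if (isnan (x)) return x + x;       /* propagate NaNs */
        if (x == -1.0) return -INFINITY;   /* log1p(-1) is -infinity */
        if (x > 0 && isinf (x)) return x;  /* log1p(+inf) is +infinity */
        if (x == 0.0) return x;            /* returns -0 for -0, +0 for +0 */
        return gsl_log1p (x);
    }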