Tags: performance, floating-point, prolog, iso-prolog, denormal-numbers

Proper handling of denormal floats in ISO-Prolog


Denormal floats are something special:

[figure: floating point number classification and representation]

What does the ISO-Prolog standard say on how these should be handled?

It is clear to me that raising an evaluation_error(underflow) exception whenever these denormals occur is a proper way of dealing with them, but this incurs additional costs: each float produced must be checked.
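
To make the cost concrete, here is a minimal sketch of the kind of check I mean; the helper name eval_or_underflow is made up, and whether the exception is raised at all depends on the implementation:

    % Hypothetical helper: evaluate Expr, mapping a possible underflow
    % exception to the atom 'underflow', so the caller can decide what to do.
    eval_or_underflow(Expr, Result) :-
        catch(Result is Expr,
              error(evaluation_error(underflow), _),
              Result = underflow).

    % The exact value of 2.5e-308 / 10.0 lies below the smallest normal
    % double (about 2.2250738585072014e-308), so it is subnormal:
    % ?- eval_or_underflow(2.5e-308 / 10.0, R).
    %    R = underflow             (implementation raises the exception)
    %    R = roughly 2.5e-309      (implementation rounds to a subnormal)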

But what about the "flush denormals to zero" (FTZ) and "treat denormals as zero" (DAZ) operating modes that many processors offer? Can Prolog implementations use these, and, if so, how do they do that properly?

Does (1) documenting the use of these operating modes, (2) ensuring that denormals are flushed to zero of the same sign (FTZ), and (3) ensuring that denormals are treated as zero of the same sign (DAZ) suffice? Help please!
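
To make the two modes concrete, here is what I would expect to observe at the Prolog level. This is only a sketch, assuming IEEE 754 double precision; no particular flag or implementation is implied, and the values shown are approximate:

    % FTZ affects results: the exact value of 2.5e-308 / 10.0 is subnormal.
    % ?- X is 2.5e-308 / 10.0.
    %    gradual underflow: X is roughly 2.5e-309 (a subnormal)
    %    FTZ:               X = 0.0, keeping the sign of the exact result
    %
    % DAZ affects operands: a subnormal operand is treated as 0.0 of the
    % same sign before the operation is performed.
    % ?- Y is 2.5e-309 * 1.0e10.
    %    gradual underflow: Y is roughly 2.5e-299
    %    DAZ:               Y = 0.0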


Solution

  • Don't skip them. Yet, the short answer from ISO/IEC 13211-1:1995, 9.1.4.2 Floating point result function, is:

    It shall be implementation defined whether a processor
    chooses round(x) or underflow when 0 < |x| < fminN.

    But first, let's call them subnormals. The obsolete (at least according to LIA-1:2012) notion denormal was, in retrospect, not very helpful, as it suggested some de-viant, de-structive properties. And no: they are not special, contrary to what you suggest.

    To see this, consider the number line of real numbers. The numbers that can be represented exactly are marked on it and get closer and closer to each other when approaching zero (from both sides). The subnormals are those that are closest to zero. The distance between consecutive subnormals (and between the smallest subnormal and zero) is the same as the distance between the smallest normal numbers. That is their anomaly (or denormality, so to speak).

    If you now remove those subnormals, you get a gigantic gap around zero that causes even more numerical anomalies. It is as if you scratched away the markings next to zero on a ruler and then used this broken ruler for measuring [1]. So in the absence of subnormals the remaining numbers are not normal, as one might believe, but rather abnormal, prone to even more errors.
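
    To see the broken ruler in action, take two distinct normal doubles just above the smallest normal number and subtract them. The following is only a sketch, assuming IEEE 754 double precision; the predicate name ruler_check is made up, and the values in the comments are approximate:

        % The difference of two distinct numbers near fminN is itself
        % subnormal.  With gradual underflow it stays nonzero; with the
        % subnormals scratched away (FTZ) it collapses to 0.0.
        ruler_check :-
            X = 3.0e-308,
            Y = 2.5e-308,
            Z is X - Y,                    % roughly 5.0e-309, a subnormal
            (   Z =:= 0.0
            ->  write('broken ruler: X and Y differ, yet X - Y =:= 0.0')
            ;   write('gradual underflow: X - Y is a nonzero subnormal')
            ),
            nl.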

    If you do not like to read Kahan on the subject (which I nevertheless suggest), may I refer you to Gustafson's The End of Error, which explains subnormals much better than I do.

    In 13211-1 there is the possibility to exclude subnormals, but this is just for compatibility with very RISCy, outdated architectures.

    So much for formal conformity. In the long term some Unum-style, CLP(BNR)-esque, Prolog IV-ish approach might be promising.


     [1] That is, if you are rounding to zero. If you produce exceptions/continuation values instead, better numerical properties will hold, as long as such exceptions do not occur.