I was wondering what the benefits are of truncating division towards minus infinity (Haskell, Ruby) rather than towards zero (C, PHP), from the perspective of programming language/compiler implementation.
It seems that truncating towards minus infinity is the right way to go, but I haven't found a reliable source for that claim, nor an account of how the decision affects the implementation of compilers. I'm particularly interested in possible compiler optimizations, but not exclusively.
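For concreteness, Haskell's Prelude exposes both conventions side by side, so the difference is easy to demonstrate (a minimal example: `quot`/`rem` truncate towards zero, `div`/`mod` towards minus infinity):

```haskell
-- Haskell's Prelude exposes both conventions, so the contrast is easy to see:
-- quot/rem truncate towards zero (like C), div/mod towards minus infinity.
main :: IO ()
main = do
  print ((-7) `quot` 2, (-7) `rem` 2)  -- (-3,-1): truncation towards zero
  print ((-7) `div` 2,  (-7) `mod` 2)  -- (-4, 1): truncation towards minus infinity
```

Note that in both cases the identity `(n `div` d) * d + n `mod` d == n` (resp. with `quot`/`rem`) holds; what differs is the sign of the remainder when the operands have opposite signs.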
Related sources:
Truncation and flooring are actually not the only choices, and in fact perhaps not even usually the best ones. I could summarize here, but it is perhaps better to just link to an excellent paper that contrasts truncated, floored, and Euclidean division, covering both the theory and some real-world applications: The Euclidean Definition of the Functions div and mod, Raymond T. Boute.
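For a flavour of that third option, here is a minimal sketch of Euclidean division in Haskell. The names `divE`/`modE` are hypothetical (they are not part of the Prelude); the defining property is that the remainder is always non-negative, 0 <= r < |d|, whatever the signs of the operands:

```haskell
-- A sketch of Euclidean division (divE/modE are made-up names, not Prelude
-- functions): adjust the truncated quotient so that 0 <= r < |d| always holds.
divE :: Integer -> Integer -> Integer
divE n d
  | r < 0 && d > 0 = q - 1  -- negative remainder, positive divisor: step down
  | r < 0 && d < 0 = q + 1  -- negative remainder, negative divisor: step up
  | otherwise      = q
  where (q, r) = n `quotRem` d

modE :: Integer -> Integer -> Integer
modE n d = n - d * divE n d

main :: IO ()
main = mapM_ print
  [ (   7  `divE`    2 ,    7  `modE`    2 )  -- ( 3, 1)
  , ((-7)  `divE`    2 , (-7)  `modE`    2 )  -- (-4, 1)
  , (   7  `divE` (-2) ,    7  `modE` (-2))  -- (-3, 1)
  , ((-7)  `divE` (-2), (-7)  `modE` (-2))  -- ( 4, 1)
  ]
```

One appeal Boute argues for is visible in the output: the Euclidean remainder depends only on `n` modulo `|d|`, matching the mathematical notion of a residue, whereas truncated and floored remainders both change with the signs of the operands.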