I learned that the Clojure reader interprets a decimal literal with the suffix 'M', like 1.23M, as a BigDecimal, and that decimal literals without the 'M' become Java doubles.
But I think it would be better if a plain decimal literal were a BigDecimal and the host-dependent decimal carried a suffix instead, like 1.23H. That way, when a number gets corrupted or truncated because of the precision limit of IEEE doubles, we could notice it easily. I also think the shorter notation should be the host-independent one.
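For example (assuming a reasonably recent Clojure REPL), the current reader behavior and the kind of truncation I mean look roughly like this:

```clojure
;; Reader behavior today: no suffix -> Java double, M suffix -> BigDecimal
(class 1.23)   ;=> java.lang.Double
(class 1.23M)  ;=> java.math.BigDecimal

;; Precision limit of IEEE doubles vs. exact BigDecimal arithmetic
(+ 0.1 0.2)    ;=> 0.30000000000000004   (binary rounding error)
(+ 0.1M 0.2M)  ;=> 0.3M                  (exact decimal result)
```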
Is there any reason, other than runtime performance, that Clojure interprets a plain decimal literal as a Java double? And I don't find performance convincing on its own: Clojure is not C/C++, and a separate way to declare a host-dependent decimal could still be provided, just like '1.23H'.
Once upon a time, for integers, Clojure would auto-promote to larger sizes when needed. This was changed so that overflow exceptions are thrown instead. My sense, from afar, was that the tradeoff came down to performance and Java interop: it was decided that the default would be ordinary fixed-size integers (Java longs, I think), and arbitrarily large integers would only be used when the programmer called for them, i.e. when the programmer knowingly decided they were willing to take the performance hit and the interop hit.
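Assuming Clojure 1.3 or later, the integer story looks roughly like this at the REPL: overflow throws by default, and you opt in to arbitrary precision with the primed operators or the N suffix.

```clojure
;; Default integer math uses Java longs and throws on overflow
(class 42)              ;=> java.lang.Long
(+ Long/MAX_VALUE 1)    ; throws ArithmeticException: integer overflow

;; Opting in to arbitrary precision is explicit
(+' Long/MAX_VALUE 1)   ;=> 9223372036854775808N  (auto-promoting +)
(class 42N)             ;=> clojure.lang.BigInt    (BigInt literal)
```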
My guess is that similar decisions were made for numbers with decimal points.
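Under that reading, the decimal situation mirrors the integer one: the default is the fast host double, and you opt in to BigDecimal explicitly with the M suffix or bigdec, accepting the cost knowingly. A rough sketch:

```clojure
;; Opting in to arbitrary-precision decimals is explicit
(class (bigdec 1.23))          ;=> java.math.BigDecimal

;; BigDecimal division may need an explicit precision (a MathContext),
;; which with-precision supplies; plain (/ 1M 3M) would throw because
;; the decimal expansion is non-terminating.
(with-precision 10 (/ 1M 3M))  ;=> 0.3333333333M
```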