I read somewhere that Java's API committee decided that a setPrecision method on BigDecimal would be dumb, since it already has setScale.
What is even dumber is forcing API users to keep methods like this in their code:
private fun getBigDecimalWith16DigitPrecision(value: BigDecimal): BigDecimal {
    return when {
        value < BigDecimal.ONE -> value.setScale(16, RoundingMode.HALF_UP)
        value < BigDecimal(10) -> value.setScale(15, RoundingMode.HALF_UP)
        value < BigDecimal(100) -> value.setScale(14, RoundingMode.HALF_UP)
        value < BigDecimal(1000) -> value.setScale(13, RoundingMode.HALF_UP)
        value < BigDecimal(10000) -> value.setScale(12, RoundingMode.HALF_UP)
        else -> value.setScale(11, RoundingMode.HALF_UP)
    }
}
Which doesn't even cover all cases.
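For instance, the first branch quietly fails for small values: a number below one gets scale 16 but ends up with fewer than 16 significant digits, because leading zeros don't count toward precision. A minimal sketch in Java (class and method names are mine, mirroring that branch):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class PrecisionGap {
    // Mirrors the Kotlin helper's first branch: values below ONE get scale 16.
    static BigDecimal with16Digits(BigDecimal value) {
        return value.setScale(16, RoundingMode.HALF_UP);
    }

    public static void main(String[] args) {
        BigDecimal result = with16Digits(new BigDecimal("0.001"));
        // Scale is 16 as requested, but precision (significant digits) is only 14.
        System.out.println(result.scale());     // 16
        System.out.println(result.precision()); // 14
    }
}
```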
In fact it's just one line, since the scale needs to change by exactly as much as the precision does:
fun BigDecimal.setPrecision(newPrecision: Int) = setScale(scale() + (newPrecision - precision()), RoundingMode.HALF_UP)
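The same trick in plain Java, as a static helper (class and method names are mine), with a couple of worked examples:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class SetPrecision {
    // Shift the scale by the precision delta: the digit count before the
    // decimal point stays fixed, so adjusting scale adjusts precision 1:1.
    static BigDecimal setPrecision(BigDecimal value, int newPrecision) {
        int newScale = value.scale() + (newPrecision - value.precision());
        return value.setScale(newScale, RoundingMode.HALF_UP);
    }

    public static void main(String[] args) {
        // 123.456789 has precision 9, scale 6; new scale = 6 + (4 - 9) = 1.
        System.out.println(setPrecision(new BigDecimal("123.456789"), 4)); // 123.5
        // 0.001234 has precision 4, scale 6; new scale = 6 + (2 - 4) = 4.
        System.out.println(setPrecision(new BigDecimal("0.001234"), 2));   // 0.0012
    }
}
```

One caveat worth knowing: when rounding carries into a new digit (e.g. 999.9 rounded to 3 digits becomes 1000), the result has one more digit of precision than requested.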
Or alternatively (more obvious, but less efficient — and note that a MathContext only caps precision at the given value, it never pads with trailing zeros):
fun BigDecimal.setPrecision(newPrecision: Int) = BigDecimal(toPlainString(), MathContext(newPrecision, RoundingMode.HALF_UP))
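The two variants are not quite equivalent. A short Java sketch (helper names are mine) showing where they diverge: the setScale version pads a low-precision value up to the target, while the MathContext version leaves it untouched:

```java
import java.math.BigDecimal;
import java.math.MathContext;
import java.math.RoundingMode;

public class CompareVariants {
    static BigDecimal viaScale(BigDecimal v, int p) {
        return v.setScale(v.scale() + (p - v.precision()), RoundingMode.HALF_UP);
    }

    static BigDecimal viaMathContext(BigDecimal v, int p) {
        return new BigDecimal(v.toPlainString(), new MathContext(p, RoundingMode.HALF_UP));
    }

    public static void main(String[] args) {
        BigDecimal v = new BigDecimal("1.5"); // precision 2
        System.out.println(viaScale(v, 4));       // 1.500 — padded to 4 digits
        System.out.println(viaMathContext(v, 4)); // 1.5   — precision only capped, not raised
    }
}
```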