I am using BigDecimal for division. I would like the quotient to be rounded to the correct number of significant figures.
For example:
import java.math.BigDecimal;
import java.math.RoundingMode;

import org.junit.Assert;
import org.junit.Test;

@Test
public void testBigDDivision() {
    BigDecimal top = new BigDecimal("0.25");
    BigDecimal bottom = new BigDecimal("105");
    int topSigFig = significantDigits(top);    // 2
    int botSigFig = significantDigits(bottom); // 3
    // the smaller of the two; evaluates to 2 in this example
    int scale = (topSigFig > botSigFig) ? botSigFig : topSigFig;
    BigDecimal quot = top.divide(bottom, scale, RoundingMode.HALF_UP);
    BigDecimal expected = new BigDecimal("0.0024");
    Assert.assertTrue(String.format("Got %s; Expected %s", quot, expected),
            expected.compareTo(quot) == 0); // fails "Got 0.00; Expected 0.0024"
}

// stolen from https://stackoverflow.com/a/21443880
public static int significantDigits(BigDecimal input) {
    input = input.stripTrailingZeros();
    return input.scale() < 0
            ? input.precision() - input.scale()
            : input.precision();
}
What is the correct way to programmatically determine the scale to ensure the quotient has the correct number of significant figures?
Significant figures are situational, not computable: a value like 1500 could carry two, three, or four significant figures, and nothing in the number itself tells you which. As you mentioned in the comment, you're writing a program to recalculate the percentage of a solution with several ingredients. I suggest you convert the ingredients' units until no input has digits to the right of the decimal point, then do the calculation.
For this you need to know the units of the input. So, if the test input is in grams, you'd first convert to milligrams (grams * 1000).
The numbers would then be 250 and 105000; do the division and keep 2 or 3 decimal digits, since anything finer rarely makes sense when the inputs have no fractional part.
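Here is a minimal sketch of that approach, assuming the measurements are in grams and that 3 decimal digits are wanted on the percentage (the class name and the 3-digit choice are mine, not from your question):

import java.math.BigDecimal;
import java.math.RoundingMode;

public class SolutionPercentage {
    public static void main(String[] args) {
        // Measurements in grams, as in the question.
        BigDecimal topGrams = new BigDecimal("0.25");
        BigDecimal bottomGrams = new BigDecimal("105");

        // Shift both to milligrams so neither input has digits
        // to the right of the decimal point: 250 and 105000.
        BigDecimal topMg = topGrams.movePointRight(3);
        BigDecimal bottomMg = bottomGrams.movePointRight(3);

        // Percentage of the solution, kept to 3 decimal digits.
        BigDecimal percent = topMg
                .multiply(BigDecimal.valueOf(100))
                .divide(bottomMg, 3, RoundingMode.HALF_UP);

        System.out.println(percent + "%"); // prints 0.238%
    }
}

Since both quantities end up in the same unit, the conversion doesn't change the ratio itself; the point is that the inputs become whole numbers, so the only rounding decision left is how many decimal digits to keep on the result.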