I am trying to unit test some methods producing BigDecimal outputs, but I am quite confused by the varying precision:
assertEquals(BigDecimal.valueOf(20), result);
I recently switched from creating BigDecimal values with the constructor (new BigDecimal(value)) to using valueOf(value), and now my tests are complaining:
Expected :20
Actual :20.00
Using BigDecimal.valueOf(20.00) does not help, so my question is: what is the correct way to test these BigDecimal instances? Most of my test cases will have zeros after the decimal point.
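For illustration, here is a minimal version of the kind of test that fails (assuming JUnit 5; the class name and the hard-coded result are just stand-ins for my actual method under test):

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import java.math.BigDecimal;
    import org.junit.jupiter.api.Test;

    class PriceCalculatorTest {

        @Test
        void totalIsTwenty() {
            // Stand-in for the value my method actually returns (scale 2)
            BigDecimal result = new BigDecimal("20.00");

            // Fails: expected 20 (scale 0), actual 20.00 (scale 2)
            assertEquals(BigDecimal.valueOf(20), result);
        }
    }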
The problem is that BigDecimal.equals follows this rule:

    Compares this BigDecimal with the specified Object for equality. Unlike compareTo, this method considers two BigDecimal objects equal only if they are equal in value and scale (thus 2.0 is not equal to 2.00 when compared by this method).
And 20 and 20.00 don't have the same scale.
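A small demonstration of that difference, using nothing but java.math.BigDecimal:

    import java.math.BigDecimal;

    public class ScaleDemo {
        public static void main(String[] args) {
            BigDecimal a = BigDecimal.valueOf(20);   // "20", scale 0
            BigDecimal b = new BigDecimal("20.00");  // "20.00", scale 2

            System.out.println(a.equals(b));     // false: same value, different scale
            System.out.println(a.compareTo(b));  // 0: numerically equal
        }
    }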
You need to use either

    new BigDecimal("20.00")

or

    BigDecimal.valueOf(20).setScale(2)

or, if you like more esoteric options,

    BigDecimal.valueOf(2000, 2)
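In your test this could look like the following sketch (assuming JUnit's assertEquals and that result really does carry scale 2, as in your output above):

    // All three expected values have scale 2, so equals(), and therefore
    // assertEquals, considers them equal to a result of 20.00.
    assertEquals(new BigDecimal("20.00"), result);
    assertEquals(BigDecimal.valueOf(20).setScale(2), result);
    assertEquals(BigDecimal.valueOf(2000, 2), result);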
The problem with BigDecimal.valueOf(20.00) is that, following the rules of BigDecimal.valueOf(double), it results in a BigDecimal of 20.0 (that is, scale 1), while (slightly different) new BigDecimal(20.00) results in a BigDecimal of 20 (scale 0).