I am working on an application that performs monetary calculations, so I'm using BigDecimal to represent the amounts.
I currently store the BigDecimals as strings in a PostgreSQL database. That made the most sense to me because I can now be sure the numbers will not lose precision, as they could if they were stored as doubles in the database.
The problem is that I cannot run many useful queries against a text column (e.g. 'smaller than X' is impossible on a number stored as text).
For numbers I do have to perform such queries on, I create an extra column named indexedY (where Y is the name of the original column). E.g. I have amount (string) and indexedAmount (double), and I fill indexedAmount by calling toDouble() on the BigDecimal instance.
I then run the query on the double column, and for each row that matches I convert the string version of the same number back to a BigDecimal and apply the comparison once more (this time in the application, on the fetched object), just to make sure no rounding error crept in while the double was in transit (from the application to the DB and back to the application).
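To illustrate the kind of drift that re-check guards against, here is a minimal, self-contained sketch (class and variable names are mine, not from the question) showing that a BigDecimal does not always survive a trip through a double:

```java
import java.math.BigDecimal;

public class RoundTripCheck {
    public static void main(String[] args) {
        // A typical monetary amount; 0.10 has no exact binary representation
        BigDecimal amount = new BigDecimal("0.10");

        // What would end up in the indexedAmount double column
        double indexed = amount.doubleValue();

        // The exact value the double actually holds
        BigDecimal roundTripped = new BigDecimal(indexed);

        System.out.println(roundTripped);
        // 0.1000000000000000055511151231257827021181583404541015625
        System.out.println(amount.compareTo(roundTripped) == 0); // false
    }
}
```

Note the use of compareTo rather than equals: BigDecimal.equals also compares scale, so 0.10 and 0.1 would be "unequal" even when numerically identical.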
I was wondering if I can avoid this extra step of creating the indexedY columns.
So my question comes down to this: is it safe to store the value of a BigDecimal as a double in a (PostgreSQL) table without losing precision?
If BigDecimal is required, I would use a NUMERIC column with whatever precision and scale you need, e.g. NUMERIC(20, 4): 20 significant digits in total, 4 of them after the decimal point. (Beware that NUMERIC(20, 20) would put all 20 digits after the decimal point, leaving no room for an integer part.)
However, if you only need 15 significant digits of precision, using a double in the database might be fine, in which case it should be fine in Java too.
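A quick sketch of that 15-digit rule (names are mine): an IEEE 754 double can faithfully round-trip any decimal value of up to 15 significant digits, so such a value converted to a double and back compares equal to the original:

```java
import java.math.BigDecimal;

public class FifteenDigitRoundTrip {
    public static void main(String[] args) {
        // A value with 15 significant decimal digits
        BigDecimal original = new BigDecimal("123456789.012345");

        // Through a double and back, using Double.toString's
        // shortest uniquely-identifying decimal representation
        BigDecimal back = new BigDecimal(Double.toString(original.doubleValue()));

        System.out.println(original.compareTo(back) == 0); // true
    }
}
```

Beyond 15 significant digits this guarantee no longer holds, which is exactly why values like 0.10 (short, but not exactly representable in binary) still need the BigDecimal/NUMERIC treatment for money.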