BINARY_DOUBLE in Oracle and Double in Java both use the IEEE 754 double-precision format, but there seems to be a difference in their accuracy.
For example, take the value 456.67d.
Oracle:
declare
  a BINARY_DOUBLE := 456.67d;
begin
  DBMS_OUTPUT.PUT_LINE(TO_CHAR(a, '9.99999999999999999999EEEE'));
end;
Result: 4.56670000000000020000E+02
Java:
import java.text.DecimalFormat;

Double d = 456.67d;
DecimalFormat formatter = new DecimalFormat("0.000000000000000000000E000");
System.out.println(formatter.format(d));
Result: 4.566700000000000000000E002
The value printed by Java is not as accurate as the one printed by Oracle.
Online converters say that the most accurate decimal representation of 456.67d is:
4.56670000000000015916157281026E2
So why is the accuracy in Java not the same as in Oracle? And how can I get a more accurate value in Java?
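The stored values are actually identical: both Oracle's BINARY_DOUBLE and Java's double hold the same 64-bit IEEE 754 pattern for 456.67, so neither side is more accurate. What differs is the decimal formatting. Judging from the outputs above, Oracle's TO_CHAR expands roughly 17 significant digits of the stored value and then pads with zeros (hence the ...020000E+02), while Java's DecimalFormat only uses the digits of the shortest string that round-trips to the same double (here just 456.67) and zero-pads beyond them. Here is a minimal sketch to inspect what Java actually stores; the expected values in the comments are my own calculation for 456.67d:

double d = 456.67d;
// Hexadecimal significand/exponent form of the stored value
// (expected: 0x1.c8ab851eb851fp8)
System.out.println(Double.toHexString(d));
// Raw 64-bit IEEE 754 pattern (expected: 407c8ab851eb851f)
System.out.println(Long.toHexString(Double.doubleToLongBits(d)));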
Use BigDecimal to see the exact decimal expansion of the stored double:

import java.math.BigDecimal;
import java.text.DecimalFormat;

Double d = 456.67d;
// new BigDecimal(double) preserves the exact binary value of the double.
// (BigDecimal is immutable; methods like setScale return a new object
// rather than modifying bd in place.)
BigDecimal bd = new BigDecimal(d);
System.out.println(bd);
// DecimalFormat formats a BigDecimal exactly, rounding to the pattern's digits.
DecimalFormat formatter = new DecimalFormat("0.000000000000000000000E000");
System.out.println(formatter.format(bd));
Output:
456.67000000000001591615728102624416351318359375
4.566700000000000159162E002
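A related detail: new BigDecimal(double) preserves the exact binary value, whereas BigDecimal.valueOf(double) goes through Double.toString and returns the shortest decimal that round-trips, i.e. the literal you typed. A small sketch contrasting the two:

import java.math.BigDecimal;

double d = 456.67d;
// Exact expansion of the stored double:
// 456.67000000000001591615728102624416351318359375
System.out.println(new BigDecimal(d));
// Shortest round-trip form via Double.toString: 456.67
System.out.println(BigDecimal.valueOf(d));

So use new BigDecimal(d) when you want the exact stored bits, and BigDecimal.valueOf(d) (or new BigDecimal("456.67")) when you want the decimal value you wrote.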