I'm working on a SQL Server database and need to make sure the numeric types are large enough for the data. There are two sources: Java `double` and Oracle `float(126)`. The SQL Server target types are either `numeric(30,10)` or `numeric(15,8)`.
I read that Java doubles can represent magnitudes from about 4.9E-324 (the smallest positive subnormal) up to 1.7976931348623157E+308, and Oracle `float(126)` has approximately 38 decimal digits of precision, while SQL Server `numeric(30,10)` has only 30 digits of precision and allows 10 digits after the decimal point.
Am I correct in saying that `numeric(30,10)` is not safe for `double` and `float(126)`, and could lead to loss of precision or overflow? Should these columns be changed to `DOUBLE PRECISION`?
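To make the overflow concern concrete, here is a small sketch in Java (the names `fits` and `NumericFit` are mine, not from any library) that checks whether a `double` value can be stored in a `NUMERIC(precision, scale)` column without exceeding its total digit count:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class NumericFit {
    // Returns true if 'value', rounded to 'scale' fractional digits,
    // needs no more than 'precision' significant digits in total --
    // i.e. it would fit in NUMERIC(precision, scale) without overflow.
    static boolean fits(double value, int precision, int scale) {
        BigDecimal d = new BigDecimal(value).setScale(scale, RoundingMode.HALF_UP);
        return d.precision() <= precision;
    }

    public static void main(String[] args) {
        // An ordinary value fits comfortably in NUMERIC(30,10)
        System.out.println(fits(123456.5, 30, 10));
        // The largest double needs roughly 309 digits before the
        // decimal point, far beyond 30 digits of precision
        System.out.println(fits(Double.MAX_VALUE, 30, 10));
    }
}
```

With 10 digits reserved for the fraction, `numeric(30,10)` leaves only 20 digits before the decimal point, so any `double` near the top of its range overflows rather than merely losing precision.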
SQL's `NUMERIC` is a decimal type. `DOUBLE PRECISION` is a binary floating-point type, typically mapped to IEEE 754 double precision (64 bits), which is exactly the format used by Java's `double`, so that should be a perfect match. `FLOAT` is also a binary floating-point type in the SQL standard, so it should be available on SQL Server as well, but its maximum precision is implementation-dependent, and if it's smaller on SQL Server there isn't really anything you can do.
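To illustrate why an IEEE 754 binary64 column is a lossless target for Java's `double`, a quick sketch (plain Java, no database involved): a `double` round-trips through its raw 64-bit pattern unchanged, so any column that stores those same 64 bits preserves the value exactly, including the extreme `Double.MAX_VALUE` that overflows `numeric(30,10)`:

```java
public class Binary64RoundTrip {
    public static void main(String[] args) {
        double max = Double.MAX_VALUE; // 1.7976931348623157E308
        // doubleToLongBits exposes the IEEE 754 binary64 bit pattern
        long bits = Double.doubleToLongBits(max);
        // longBitsToDouble rebuilds the double from those same bits
        double back = Double.longBitsToDouble(bits);
        // The round trip is exact: same 64 bits in, same double out
        System.out.println(back == max);
    }
}
```

The same guarantee cannot hold in the other direction for Oracle `float(126)`: at roughly 38 decimal digits it carries more precision than binary64's 15-17, so even `DOUBLE PRECISION` may round those values.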