Why is FLOAT stored as real in sys.columns or INFORMATION_SCHEMA.COLUMNS when the precision is <= 24? For example:
CREATE TABLE dummy
(
a FLOAT(24),
b FLOAT(25)
)
Checking the data type:
SELECT TABLE_NAME,
COLUMN_NAME,
DATA_TYPE,
NUMERIC_PRECISION
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'dummy'
Result:
+------------+-------------+-----------+-------------------+
| TABLE_NAME | COLUMN_NAME | DATA_TYPE | NUMERIC_PRECISION |
+------------+-------------+-----------+-------------------+
| dummy      | a           | real      | 24                |
| dummy      | b           | float     | 53                |
+------------+-------------+-----------+-------------------+
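The same mapping shows up in sys.columns. A minimal check (assuming dummy was created in the default dbo schema of the current database):

SELECT c.name AS COLUMN_NAME,
       TYPE_NAME(c.user_type_id) AS DATA_TYPE,
       c.precision AS NUMERIC_PRECISION
FROM sys.columns AS c
WHERE c.object_id = OBJECT_ID('dbo.dummy')

Column a should come back as real with precision 24 and column b as float with precision 53, matching the INFORMATION_SCHEMA result above.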
So why is float stored as real when the precision is less than or equal to 24? Is this documented somewhere?
From an MSDN article that discusses the difference between float and real in T-SQL:
The ISO synonym for real is float(24).
float [ (n) ]
Where n is the number of bits that are used to store the mantissa of the float number in scientific notation and, therefore, dictates the precision and storage size. If n is specified, it must be a value between 1 and 53. The default value of n is 53.
n value | Precision | Storage size
1-24    | 7 digits  | 4 bytes
25-53   | 15 digits | 8 bytes
SQL Server treats n as one of two possible values. If 1<=n<=24, n is treated as 24. If 25<=n<=53, n is treated as 53.
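A quick way to see that collapsing in action (an illustrative sketch; dummy2 is just a throwaway table name):

CREATE TABLE dummy2
(
    f1  FLOAT(1),
    f24 FLOAT(24),
    f25 FLOAT(25),
    f53 FLOAT(53)
)

SELECT COLUMN_NAME, DATA_TYPE, NUMERIC_PRECISION
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'dummy2'

f1 and f24 should both be reported as real with precision 24, while f25 and f53 are reported as float with precision 53.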
As to why SQL Server labels it as real, I think it is just a synonym. However, under the hood it is still a float(24).
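You can check the "under the hood" part directly: a column declared as real and one declared as float(24) end up with the same system type id. A minimal sketch (dummy3 is just an example name, assuming the default dbo schema):

CREATE TABLE dummy3
(
    r REAL,
    f FLOAT(24)
)

SELECT c.name,
       c.system_type_id,
       TYPE_NAME(c.system_type_id) AS type_name
FROM sys.columns AS c
WHERE c.object_id = OBJECT_ID('dbo.dummy3')

Both rows should report the same type_name (real), so the two declarations are interchangeable.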