Tags: sql, sql-server, types, decimal, floating-accuracy

SQL Server: why is Float more accurate than Decimal


This post has the following code:

DECLARE @A DECIMAL(3, 0), @B DECIMAL(18, 0), @F FLOAT

SET @A = 3
SET @B = 3
SET @F = 3

SELECT 1 / @A * 3.0, 1 / @B * 3.0, 1 / @F * 3.0
SELECT 1 / @A * 3  , 1 / @B * 3  , 1 / @F * 3 

Using float, the expression evaluates to 1. Using decimal, it evaluates to a string of 9s after the decimal point. Why does float yield the more accurate answer in this case? I thought decimal was the more accurate/exact type, per "Difference between numeric, float and decimal in SQL Server" and "Use Float or Decimal for Accounting Application Dollar Amount?"


Solution

  • The decimal values that you have declared are fixed width, with no digits after the decimal point (scale 0). This affects the calculations.

    SQL Server has a rather complex set of rules for calculating the precision and scale of arithmetic expressions that involve decimal operands. The details are in the documentation, and you can see the effect directly in the first sketch below. You also need to take into account that numeric literals such as 3.0 are themselves typed as decimal, not float.

    Also, in the end, you need to convert back to a decimal with the precision and scale that you actually want. Once you do that (second sketch below), you may well find that float and decimal give equivalent results.
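
    As a rough sketch of that first point, SQL_VARIANT_PROPERTY lets you inspect the type, precision, and scale that SQL Server assigns to the intermediate expression (the variable name reuses @A from the question; the exact numbers reported will follow the documented expression-typing rules on your server):

        DECLARE @A DECIMAL(3, 0)
        SET @A = 3

        -- 1 / @A is evaluated with a limited scale (per the documented rules,
        -- roughly max(6, s1 + p2 + 1) fractional digits), so 1/3 is stored as
        -- 0.333333... and multiplying by 3.0 yields 0.999999... rather than 1.
        SELECT
            SQL_VARIANT_PROPERTY(CAST(1 / @A * 3.0 AS SQL_VARIANT), 'BaseType')  AS BaseType,
            SQL_VARIANT_PROPERTY(CAST(1 / @A * 3.0 AS SQL_VARIANT), 'Precision') AS ExprPrecision,
            SQL_VARIANT_PROPERTY(CAST(1 / @A * 3.0 AS SQL_VARIANT), 'Scale')     AS ExprScale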
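
    And a minimal sketch of the last point, assuming DECIMAL(10, 2) is the precision you actually want (that target type is illustrative, not from the original post). Converting a decimal to a lower scale rounds, so both columns should come out the same:

        DECLARE @A DECIMAL(3, 0), @F FLOAT
        SET @A = 3
        SET @F = 3

        -- The 0.999999... decimal result rounds up to 1.00 on conversion,
        -- matching the float result.
        SELECT
            CAST(1 / @A * 3.0 AS DECIMAL(10, 2)) AS DecimalResult,
            CAST(1 / @F * 3.0 AS DECIMAL(10, 2)) AS FloatResult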