In SQL Server, I have decimal data to store in a table (which is never used for joins or filtering). This decimal data is variable: 80% of the time the values are single digits (1, 4, 5), and the remaining 20% have 16 decimal digits (0.8999999761581421, 3.0999999046325684).
I am wondering if I can save any storage space by going with varchar instead of float, or if I should stick with float since this is numeric data?
Always use the most appropriate datatype! Since this is clearly numerical data, use a numerical type. This will allow you to e.g. sum the values or order by them - these are numbers, so treat and store them as such!
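For illustration, here is a small, hypothetical sketch (the temp table and column names are made up) of how a numeric column sorts and aggregates correctly while a varchar column does not:

    -- Hypothetical sketch: text sorts lexicographically, numbers sort numerically
    CREATE TABLE #Vals (val_text VARCHAR(20), val_num DECIMAL(18,16));

    INSERT INTO #Vals (val_text, val_num)
    VALUES ('9', 9), ('10', 10), ('0.8999999761581421', 0.8999999761581421);

    SELECT val_text FROM #Vals ORDER BY val_text;  -- '0.89...', '10', '9'  (text order)
    SELECT val_num  FROM #Vals ORDER BY val_num;   -- 0.89..., 9, 10        (numeric order)

    SELECT SUM(val_num) FROM #Vals;                -- works directly on the numeric column
    -- SELECT SUM(val_text) FROM #Vals;            -- would fail: varchar is invalid for SUM

    DROP TABLE #Vals;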
If you need to support fractions, you could use FLOAT or REAL, but those are notorious for rounding errors etc. Using DECIMAL(p,s) avoids those pitfalls - it's stable, it's precise, not prone to rounding errors. So that would be my logical choice.
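To make the rounding point concrete, here is a minimal sketch (variable names are mine) comparing FLOAT's binary approximation with DECIMAL's exact decimal arithmetic:

    -- FLOAT stores a binary approximation; DECIMAL stores the exact decimal value
    DECLARE @f FLOAT          = 0.1;
    DECLARE @d DECIMAL(18,16) = 0.1;

    SELECT CASE WHEN @f * 3 = 0.3 THEN 'equal' ELSE 'not equal' END AS float_check,    -- 'not equal'
           CASE WHEN @d * 3 = 0.3 THEN 'equal' ELSE 'not equal' END AS decimal_check;  -- 'equal'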
See the official MS docs for DECIMAL for the details on how to define p (precision - the total number of digits overall) and s (scale - the number of digits after the decimal point).
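Based on the sample values in your question, something like DECIMAL(17,16) (1 digit before the decimal point, 16 after) would fit; the table and column names below are hypothetical, and you should adjust p and s to your actual value range:

    -- Hypothetical table; DECIMAL(17,16) = 17 digits total, 16 of them after the decimal point
    CREATE TABLE dbo.Measurements
    (
        MeasuredValue DECIMAL(17,16) NOT NULL
    );

    INSERT INTO dbo.Measurements (MeasuredValue)
    VALUES (4), (0.8999999761581421), (3.0999999046325684);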
And btw: those are stored in fewer bytes than a varchar column large enough to hold these values would be!
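A rough check of that, assuming the DECIMAL(17,16) definition from above (per the documented storage rules, precisions 10-19 take 9 bytes; the varchar additionally carries variable-length row overhead that DATALENGTH does not show):

    -- Compare the byte sizes of one sample value in both representations
    SELECT DATALENGTH(CAST(3.0999999046325684 AS DECIMAL(17,16))) AS decimal_bytes,  -- 9
           DATALENGTH(CAST('3.0999999046325684' AS VARCHAR(20)))  AS varchar_bytes;  -- 18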