Tags: c#, sql-server, sqlclr

C# SqlDecimal flipping sign in multiplication of small numbers


It seems I'm running into more woes with 'my most favorite datatype' SqlDecimal. I'm wondering if this should be considered a bug or not.

When I multiply two small numbers in SQL, I get the expected result. When I run the same numbers through a SQLCLR function, the results are, well, surprising.

C# code:

using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;

namespace TestMultiplySQLDecimal
{
    public static class Multiplier
    {
        [SqlFunction(DataAccess = DataAccessKind.None, IsDeterministic = true, IsPrecise = true)]
        public static SqlDecimal Multiply(SqlDecimal a, SqlDecimal b)
        {
            if (a.IsNull || b.IsNull) return SqlDecimal.Null;
            return a*b;
        }
    }
}

SQL code:

USE tempdb
GO
IF DB_ID('test') IS NOT NULL DROP DATABASE test
GO
CREATE DATABASE test
GO
USE test
GO

CREATE ASSEMBLY TestMultiplySQLDecimal 
FROM 'C:\Users\tralalalaa\Documents\visual studio 2015\Projects\TestMultiplySQLDecimal\TestMultiplySQLDecimal\bin\Release\TestMultiplySQLDecimal.dll'
WITH PERMISSION_SET = SAFE
GO

CREATE FUNCTION dbo.fn_multiply(@a decimal(38,8), @b decimal(18,8)) 
RETURNS decimal(38,8)
EXTERNAL NAME TestMultiplySQLDecimal.[TestMultiplySQLDecimal.Multiplier].Multiply
GO
DECLARE @a decimal(38, 8),
        @b decimal(18, 8),
        @c decimal(38, 8),
        @f decimal(38, 8)

SELECT @a = -0.00000450,
       @b = 0.193,
       @c = NULL,
       @f = NULL

SELECT @c = @a * @b,
       @f = dbo.fn_multiply(@a, @b)

SELECT multiply = null, c = @c, f = @f

The outcome of this is: c = -0.00000100, f = +0.00000100

I know the 'absolute' difference is 'minimal', and I've "played down" bigger errors before by blaming them on "rounding differences"... But it's going to be hard to explain to clients that a negative times a positive comes out positive. And after all, T-SQL handles it fine...

I can try to work around it by using decimal(28,8) instead of decimal(38,8), but then I'll run into other (totally unrelated) issues =/
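
Another thought I've been playing with (only a sketch, not battle-tested; the SignSafeMultiplier/MultiplySignSafe names are just for illustration) is to patch the sign back up inside the CLR function itself:

using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;

namespace TestMultiplySQLDecimal
{
    public static class SignSafeMultiplier
    {
        [SqlFunction(DataAccess = DataAccessKind.None, IsDeterministic = true, IsPrecise = true)]
        public static SqlDecimal MultiplySignSafe(SqlDecimal a, SqlDecimal b)
        {
            if (a.IsNull || b.IsNull) return SqlDecimal.Null;

            SqlDecimal result = a * b;

            // Operands with opposite signs should never give a positive product.
            // If the sign got lost, flip it back. (If the product is genuinely zero,
            // negating it should be harmless, as far as I can tell.)
            if (a.IsPositive != b.IsPositive && result.IsPositive)
            {
                result = -result;
            }

            return result;
        }
    }
}

It would still need registering with CREATE FUNCTION the same way as fn_multiply above.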


The following console application exhibits the same problem, without having to get SQL Server/SQLCLR involved:

using System;
using System.Data.SqlTypes;

namespace PlayAreaCSCon
{
    class Program
    {
        static void Main(string[] args)
        {
            var dec1 = new SqlDecimal(-0.00000450d);
            var dec2 = new SqlDecimal(0.193d);
            dec1 = SqlDecimal.ConvertToPrecScale(dec1, 38, 8);
            dec2 = SqlDecimal.ConvertToPrecScale(dec2, 18, 8);
            Console.WriteLine(dec1 * dec2);
            Console.ReadLine();
        }
    }
}

Prints 0.000001 (the sign has been lost).
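
For comparison, the decimal(28,8) workaround mentioned above can be checked in the same console harness. This is only a sketch: I'd expect the sign to come back negative here, matching that observation, but I haven't verified the exact printed value.

using System;
using System.Data.SqlTypes;

namespace PlayAreaCSCon
{
    class Program28
    {
        static void Main(string[] args)
        {
            var dec1 = new SqlDecimal(-0.00000450d);
            var dec2 = new SqlDecimal(0.193d);
            // Same operands, but the first is kept at precision 28 rather than 38,
            // mirroring the decimal(28,8) workaround. The result scale then does not
            // have to be clamped all the way down to 6, and the sign should survive.
            dec1 = SqlDecimal.ConvertToPrecScale(dec1, 28, 8);
            dec2 = SqlDecimal.ConvertToPrecScale(dec2, 18, 8);
            Console.WriteLine(dec1 * dec2);
            Console.ReadLine();
        }
    }
}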


Solution

  • I believe the bug is located at around line 1550 of the SqlDecimal source:

    ret = new SqlDecimal(rgulRes, (byte)culRes, (byte)ResPrec, 
                         (byte)ActualScale, fResPositive);
    
    if (ret.FZero ())
         ret.SetPositive();
    
    ret.AssertValid();
    
    ret.AdjustScale(lScaleAdjust, true);
    
    return ret;
    

    It first constructs a new decimal using the final scale parameter. It next checks whether the result is "zero", based on the passed-in constructor parameters.

    Then, after asserting everything is valid, it performs a scale adjustment.

    At the time that the FZero check is performed, the result is something like -0.0000008685. And we know that the final scale will be 6: the raw result type would be decimal(57,16) (precision 38 + 18 + 1, scale 8 + 8), the precision has to be capped at 38, and the scale is cut back correspondingly, but not below the guaranteed minimum of 6. At scale 6, the first six decimal digits are all zero.

    It's only after that, when the scale is being adjusted, that it takes rounding into consideration and moves a 1 into the final decimal position (the short AdjustScale check at the end of this answer suggests the rounding step on its own would have kept the sign).

    It's a bug. Unfortunately, the source code for the SQL Server native implementation of decimal isn't publicly available, so we cannot compare it to the managed implementation of SqlDecimal to see how similar they are and how the original avoids the same bug.
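
To illustrate the rounding point above (again just a sketch of mine, not part of the original analysis): feeding the correctly signed intermediate value through the public AdjustScale method should keep the sign, which suggests the problem really is the ordering of the zero check and the scale adjustment inside operator*.

using System;
using System.Data.SqlTypes;

namespace PlayAreaCSCon
{
    class AdjustScaleCheck
    {
        static void Main(string[] args)
        {
            // The exact intermediate product from the question, still carrying its sign.
            var product = SqlDecimal.Parse("-0.0000008685");   // scale 10

            // Reduce the scale from 10 to 6 with rounding - the same adjustment the
            // multiplication performs at the end. I'd expect -0.000001 here: the
            // rounded value is non-zero, so its sign is left alone.
            Console.WriteLine(SqlDecimal.AdjustScale(product, -4, true));
        }
    }
}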