Creating a SHA-1 hash in C# gives a different result than doing the same thing in SQL Server. Why aren't they the same?
using System;
using System.Security.Cryptography;
using System.Text;

using (SHA1Managed sha1 = new SHA1Managed())
{
    Guid g = new Guid("81FE483B-39ED-4052-8FFC-55A710907D9B");
    var appGateId = g.ToString();
    var hash = sha1.ComputeHash(Encoding.UTF8.GetBytes(appGateId));

    var sb = new StringBuilder();
    foreach (byte b in hash)
    {
        sb.Append(b.ToString("x2"));
    }
    Console.WriteLine(sb.ToString());
}
Here the result is ebb52fefab48f428b6ee03174276c8ade0b4ef1a.
With SQL Server and this code:
declare @a uniqueidentifier = '81FE483B-39ED-4052-8FFC-55A710907D9B', @b nvarchar(50)
select @b = CONVERT(nvarchar(50), @a)
select convert(varchar(50), hashbytes('SHA1', @b), 2)
the result is CC34B1E702F6E7FE0EE92ED4D5390BB4765B9A21.
@Marc Gravell's comment explains the problem here: in C# you are hashing the UTF-8 bytes of the string, but in SQL Server you're hashing an nvarchar, which is UTF-16, so the byte sequences being hashed aren't comparable.
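You can see this from the C# side as well. As a sketch (assuming the aim is to reproduce the SQL Server value rather than change it): SQL Server is hashing the uppercase, UTF-16 form of the GUID, so switching the C# code to Encoding.Unicode and an uppercase string should produce the nvarchar hash shown above.
using System;
using System.Security.Cryptography;
using System.Text;

using (SHA1Managed sha1 = new SHA1Managed())
{
    Guid g = new Guid("81FE483B-39ED-4052-8FFC-55A710907D9B");

    // CONVERT(nvarchar(50), @a) yields an uppercase string stored as UTF-16 LE,
    // so mirror both of those choices here.
    var appGateId = g.ToString().ToUpperInvariant();
    var hash = sha1.ComputeHash(Encoding.Unicode.GetBytes(appGateId));

    var sb = new StringBuilder();
    foreach (byte b in hash)
    {
        sb.Append(b.ToString("x2"));
    }

    // Should print cc34b1e702f6e7fe0ee92ed4d5390bb4765b9a21,
    // i.e. the SQL Server result above, just in lowercase hex.
    Console.WriteLine(sb.ToString());
}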
If you actually use a varchar, however, and change the characters to lowercase (a uniqueidentifier is converted to uppercase when cast to a string-based data type in T-SQL), you do get the same value. You could also (if you're on SQL Server 2019+) explicitly collate the value to UTF-8, but considering a GUID is made up of only hex characters (0-9A-F) and hyphens, this seems a little pointless:
DECLARE @a uniqueidentifier = '81FE483B-39ED-4052-8FFC-55A710907D9B', @b varchar(50);
SELECT @b = LOWER(CONVERT(varchar(50), @a));
SELECT @b, LOWER(CONVERT(varchar(50), HASHBYTES('SHA1', @b), 2));
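This should now return ebb52fefab48f428b6ee03174276c8ade0b4ef1a, the same value the C# code produces, because a lowercase GUID string contains only ASCII characters, so its varchar bytes and its UTF-8 bytes are identical.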