I created a table 'MyTable' with a field "field1" of the smallint type.
Then I inserted the negative number -12289 into field1.
Then I executed the following SQL query:
select field1 ^ 0xcfff from MyTable
The result is zero.
But if I replace the hexadecimal integer constant with the equivalent decimal integer constant, as below:
select field1 ^ 53247 from MyTable
The result is -65536.
Why?
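For reference, here is the whole scenario as one script (the table and insert are reconstructed from the description above; the commented results are the ones described above):

create table MyTable (field1 smallint);
insert into MyTable (field1) values (-12289);

select field1 ^ 0xcfff from MyTable; -- 0
select field1 ^ 53247 from MyTable;  -- -65536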
The only clue is on https://learn.microsoft.com/en-us/sql/t-sql/data-types/int-bigint-smallint-and-tinyint-transact-sql?view=sql-server-ver15
"Integer constants greater than 2,147,483,647 are converted to the decimal data type, not the bigint data type."
But both 0xcfff and 53247 are much smaller than 2,147,483,647. Why do they produce different results?
Update:
To my understanding, the key point of this question is that we can cast 0xcfff to smallint, like this:
select cast(0xcfff as smallint)
But we CANNOT cast 53247 to smallint; the following line causes an arithmetic overflow error:
select cast(53247 as smallint)
This is different from C/C++. In C/C++, both casts are OK.
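The closest I can get to the C/C++ behaviour in T-SQL is to force the truncation by going through a 2-byte binary value first (a sketch that relies on SQL Server dropping the high-order bytes when converting an int to a shorter binary, which is documented but not guaranteed to stay the same across versions):

select cast(cast(53247 as binary(2)) as smallint); -- -12289, like (short)53247 in C/C++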
The little-known bit you are missing here is Data Type Precedence. Here is how you can check it:
declare @t table (
    Id smallint not null
);

insert into @t (Id)
select -12289;

select sq.*,
    sql_variant_property(sq.XBin, 'BaseType') as [BinType],
    sql_variant_property(sq.XDec, 'BaseType') as [DecType]
from (
    select t.Id,
        t.Id ^ 0xcfff as [XBin],
        t.Id ^ 53247 as [XDec]
    from @t t
) sq;
The binary literal 0xcfff takes 2 bytes, so it can be implicitly converted to smallint, the type of the column itself (binary has lower precedence than smallint). The decimal literal, however, is interpreted as int, not because it requires more than 2 bytes, but because SQL Server types every integer constant up to 2,147,483,647 as int and everything greater as decimal. This means that the column now has to be implicitly converted to int, which has higher precedence than smallint, and its sign is preserved (sign-extended) during that conversion, so the XOR is evaluated on 4-byte values.
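To spell the arithmetic out, here is a sketch of the two conversion paths (the casts are written explicitly for illustration; they are what the implicit conversions amount to):

-- 0xcfff is varbinary, which has lower precedence than smallint,
-- so it is converted to the column's type; 0xcfff as smallint is -12289:
select cast(-12289 as smallint) ^ cast(0xcfff as smallint); -- -12289 ^ -12289 = 0

-- 53247 is an int constant, so the smallint column is promoted to int with
-- sign extension: -12289 becomes 0xFFFFCFFF, while 53247 is 0x0000CFFF:
select cast(-12289 as int) ^ 53247; -- 0xFFFF0000 = -65536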