Tags: c#, types, decimal, primitive

Behind the scenes, what's happening with the decimal value type in C#/.NET?


How is the decimal type implemented?

Update

  • It's a 128-bit value type (16 bytes)
  • 1 sign bit
  • 96 bits (12 bytes) for the mantissa
  • 8 bits for the exponent
  • remaining bits (23 of them!) set to 0

Thanks! I'm going to stick with using a 64-bit long with my own implied scale.
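
A minimal sketch of that "long with an implied scale" idea, for comparison. The struct name, the scale of 10,000 (4 implied decimal places), and the helper methods here are illustrative choices, not a full fixed-point library:

```csharp
using System;

// Fixed-point value stored as a long counting 1/10000ths, so ordinary
// integer arithmetic stays exact. Scale and name are assumptions.
struct ScaledLong
{
    const long Scale = 10_000;          // 4 implied decimal places
    public readonly long Raw;           // value × Scale

    public ScaledLong(long raw) => Raw = raw;

    public static ScaledLong FromDouble(double d) =>
        new ScaledLong((long)Math.Round(d * Scale));

    public static ScaledLong operator +(ScaledLong a, ScaledLong b) =>
        new ScaledLong(a.Raw + b.Raw);   // exact integer addition

    public override string ToString() =>
        (Raw / (decimal)Scale).ToString();
}

class Demo
{
    static void Main()
    {
        var a = ScaledLong.FromDouble(0.1);
        var b = ScaledLong.FromDouble(0.2);
        Console.WriteLine(a + b);        // prints 0.3, with no binary
                                         // floating-point error
    }
}
```

The trade-off versus decimal is range: the mantissa is 63 bits instead of 96, and the scale is fixed at compile time rather than carried per value.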


Solution

  • See the Decimal Floating Point article on Wikipedia, which links specifically to this article about System.Decimal.

    A decimal is stored in 128 bits, even though only 102 are strictly necessary. It is convenient to consider the decimal as three 32-bit integers representing the mantissa, and then one integer representing the sign and exponent. The top bit of the last integer is the sign bit (in the normal way, with the bit being set (1) for negative numbers) and bits 16-23 (the low bits of the high 16-bit word) contain the exponent. The other bits must all be clear (0). This representation is the one given by decimal.GetBits(decimal) which returns an array of 4 ints.
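You can see that layout directly with decimal.GetBits. The value -1.5m chosen here is just an example: it is stored as a mantissa of 15 with a scale of 1 (i.e. -(15 × 10⁻¹)) and the sign bit set in the fourth integer:

```csharp
using System;

class DecimalLayoutDemo
{
    static void Main()
    {
        // GetBits returns 4 ints: [low, mid, high, flags].
        // The 96-bit mantissa spans the first three ints; the fourth
        // holds the sign (bit 31) and the exponent/scale (bits 16-23).
        int[] bits = decimal.GetBits(-1.5m);

        Console.WriteLine(bits[0]);          // 15 (low 32 bits of the mantissa)
        Console.WriteLine(bits[1]);          // 0
        Console.WriteLine(bits[2]);          // 0

        int flags = bits[3];
        int scale = (flags >> 16) & 0xFF;    // exponent: power of 10 divisor
        bool negative = flags < 0;           // bit 31 is the sign bit

        Console.WriteLine(scale);            // 1
        Console.WriteLine(negative);         // True
    }
}
```

All the bits the quoted text says "must be clear" really are clear: for any valid decimal, flags masked with anything outside the sign bit and bits 16-23 is zero.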