I may well not have a proper understanding of significant figures, but the book

> *C# 6.0 in a Nutshell* by Joseph Albahari and Ben Albahari (O'Reilly). Copyright 2016 Joseph Albahari and Ben Albahari, 978-1-491-92706-9.

provides a table comparing `double` and `decimal` that includes, among other rows:

| | `double` | `decimal` |
| --- | --- | --- |
| Precision | 15–16 significant figures | 28–29 significant figures |
| Range | ±(~10⁻³²⁴ to ~10³⁰⁸) | ±(~10⁻²⁸ to ~10²⁸) |
Is it not counter-intuitive that, on the one hand, a `double` can hold a smaller number of significant figures, while on the other it can represent numbers far bigger than a `decimal`, which can hold a greater number of significant figures?
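For a concrete illustration (in Python, since its `float` is the same 64-bit IEEE 754 format as C#'s `double`; `Decimal` here is Python's decimal type, used only as a rough stand-in for C#'s `decimal`):

```python
from decimal import Decimal

# A double's range is enormous...
big = 1e300 * 1e8              # ~10^308, still a finite double
print(big)

# ...but push past ~1.8 x 10^308 and the format overflows:
print(1e308 * 10)              # inf

# And a double carries only ~15-16 significant decimal digits:
x = float(12345678901234567890)    # 20 significant digits requested
print(int(x))                      # the low-order digits were rounded away

# A decimal type makes the opposite trade: far fewer orders of magnitude,
# but more significant figures. Decimal stores these digits exactly:
print(Decimal("12345678901234567890"))
```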
Imagine you were told you can store a value, but with a limitation: you have only ten slots, each holding a digit 0–9 or a minus sign. You get to invent the rules for encoding and decoding, so within those slots you can represent the value however you like.
The first way you store things is simply as the value `xxxxxxxxxx`, meaning the number 123 is stored as `0000000123`. Simple to store and read. This is how an `int` works.
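That first scheme is easy to sketch (Python for illustration; the helper names are made up):

```python
# Hypothetical helpers for the ten-slot integer scheme described above.
def encode_int(n: int) -> str:
    return str(n).zfill(10)      # 123 -> "0000000123"

def decode_int(slots: str) -> int:
    return int(slots)            # leading zeros fall away

print(encode_int(123))           # "0000000123"
print(decode_int("0000000123"))  # 123
```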
Now you decide you want to store fractional numbers, so you change the rules a bit. Now you store `xxxxxxyyyy`, where `x` is the integer portion and `y` is the fractional portion. So, 123.98 would be stored as `0001239800`. This is roughly how a `decimal` value works. You can see the largest value you can store is `9999999999`, which decodes to 999999.9999. This means there is a hard upper limit on the size of the value, but the number of significant digits is large, at 10.
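A sketch of that fixed-point layout (again hypothetical Python helpers):

```python
# Hypothetical helpers for the xxxxxxyyyy fixed-point scheme: six integer
# digits and four fractional digits, packed into the same ten slots.
def encode_fixed(value: float) -> str:
    scaled = round(value * 10_000)       # shift the point four places right
    if not 0 <= scaled <= 9_999_999_999:
        raise OverflowError("only 0 .. 999999.9999 fits")
    return str(scaled).zfill(10)

def decode_fixed(slots: str) -> float:
    return int(slots) / 10_000

print(encode_fixed(123.98))        # "0001239800"
print(decode_fixed("9999999999"))  # 999999.9999, the hard upper limit
```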
There is a way to store larger values, and that's to store the components x and y of the formula x × 10^y in the same `xxxxxxyyyy` layout: six digits of mantissa `x`, with the remaining slots holding the exponent `y`. So, to store 123.98, you store `01239800-2` (x = 012398, y = −2), which decodes as 12398 × 10⁻² = 123.98. This means you can store much bigger numbers by changing `y`, but the number of significant digits is now fixed at 6. This is basically how a `double` works.
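And a sketch of that floating-point variant, under the same assumptions (six mantissa slots, the remaining slots for the exponent; hypothetical helper names):

```python
# Hypothetical helpers for the floating-point scheme: the slots hold a
# six-digit mantissa x followed by an exponent y, decoding to x * 10**y.
def encode_sci(mantissa: int, exponent: int) -> str:
    assert 0 <= mantissa <= 999_999          # only six significant digits fit
    return str(mantissa).zfill(6) + str(exponent).rjust(4, "0")

def decode_sci(slots: str):
    mantissa = int(slots[:6])
    exponent = int(slots[6:].lstrip("0") or "0")
    return mantissa * 10 ** exponent

print(encode_sci(12398, -2))     # "01239800-2", i.e. 12398 * 10^-2 = 123.98
print(decode_sci("1234560300"))  # 123456 * 10^300, far beyond the fixed-point limit
```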