Why are hexadecimal numbers prefixed with 0x?
I understand the usage of the prefix, but I don't understand the significance of why 0x was chosen.
Short story: The 0 tells the parser it's dealing with a constant (and not an identifier/reserved word). Something is still needed to specify the number base: the x is an arbitrary choice.
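To make the short story concrete, here is a minimal C sketch (my own illustration, not part of the original answer): the same value is written three ways, and only the prefix tells the compiler which base to parse it in.

    #include <stdio.h>

    int main(void) {
        int dec = 26;    /* no prefix: decimal               */
        int oct = 032;   /* leading 0:  octal, also 26       */
        int hex = 0x1A;  /* leading 0x: hexadecimal, also 26 */

        printf("%d %d %d\n", dec, oct, hex);  /* prints: 26 26 26 */
        return 0;
    }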
Long story: In the 60's, the prevalent programming number systems were decimal and octal: mainframes had 12, 18, 24 or 36 bits per machine word, which is nicely divisible by 3 = log2(8).
The BCPL language used the syntax 8 1234 for octal numbers. When Ken Thompson created B from BCPL, he used the 0 prefix instead.
prefix instead. This is great because
0
is the same in both bases),00005 == 05
), and#123
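A tiny sketch of my own to check the "mathematically sane" point: extra leading zeros do not change the value of a constant written this way.

    #include <assert.h>

    int main(void) {
        assert(00005 == 05);  /* leading zeros are harmless */
        assert(05 == 5);      /* octal 5 is still five      */
        return 0;
    }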
When C was created from B, the need for hexadecimal numbers arose (the PDP-11 had 16-bit words and 8-bit bytes) and all of the points above were still valid. Since octals were still needed for other machines, 0x was arbitrarily chosen (00 or 0h was probably ruled out as awkward).
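For a rough sense of why hexadecimal fit the PDP-11 so well (again my own sketch, assuming a platform with 8-bit bytes and 16-bit shorts): each hex digit covers exactly four bits, so a byte is two digits and a word is four.

    #include <stdio.h>

    int main(void) {
        unsigned char  byte = 0xFF;    /* 2 hex digits = 8 bits  (assuming 8-bit char)   */
        unsigned short word = 0xFFFF;  /* 4 hex digits = 16 bits (assuming 16-bit short) */

        printf("%u %u\n", (unsigned)byte, (unsigned)word);  /* prints: 255 65535 */
        return 0;
    }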
C# is a descendant of C, so it inherits the syntax.
You can find details about the history of C at Dennis M. Ritchie's page.