I've always wondered why leading zeroes (`0`) are used to represent octal numbers, instead of, for example, `0o`. The use of `0o` would be just as helpful, but would not cause as many problems as a leading `0` does (e.g. `parseInt('08');` in JavaScript). What are the reasons behind this design choice?
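For concreteness, here is a minimal sketch of the hazard (the pre-ES5 behavior is an assumption based on how the ES3 spec let engines treat a leading `0` in `parseInt`; `0o` is the octal prefix standardized in ES2015):

```js
// Pre-ES5 engines were permitted to parse a leading 0 as octal,
// so parseInt('08') could yield 0 ('8' is not an octal digit);
// ES5 removed that rule, so modern engines return 8.
parseInt('08');     // 0 on some old engines, 8 on modern ones
parseInt('08', 10); // 8 everywhere: the defensive radix argument

// Numeric literals show the same split:
010;   // 8 (legacy octal literal), but a SyntaxError in strict mode
0o10;  // 8, using the unambiguous octal prefix added in ES2015
```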
Most modern languages inherit this convention from C, which inherited it from B, which in turn inherited it from BCPL.
Except that BCPL used `#1234` for octal and `#x1234` for hexadecimal. B departed from this convention because `#` was a unary operator in B (integer-to-floating-point conversion), so `#1234` could not be used, and `#` as a base indicator was replaced with `0`.
The designers of B tried to make the syntax very compact. I guess this is the reason they did not use a two-character prefix.