C, C++, C#, Java, Rust, etc. have signed ints by default. Most of the time you want unsigned variables, since cases where you have to represent something that can be below zero are less frequent than cases where you deal with natural numbers. Also, unsigned variables don't have to be encoded in two's complement form, and they have the most significant bit free for an extra range of values.
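For instance, a quick check with the standard limits (assuming a typical platform with 32-bit int) shows the extra bit roughly doubling the maximum representable value:

```c
#include <stdio.h>
#include <limits.h>

/* On a typical platform with 32-bit int, the unsigned type reuses
   the sign bit for magnitude, doubling the maximum value. */
int main(void)
{
    printf("INT_MAX  = %d\n", INT_MAX);    /* e.g. 2147483647 */
    printf("UINT_MAX = %u\n", UINT_MAX);   /* e.g. 4294967295 */
    return 0;
}
```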
Taking all this into account, why would creators of languages make ints signed by default?
I think your basic claim is false. Negative numbers are very common in real life. Think of temperatures, bank account balances, SO question and answer scores... Modeling physical data in computing requires a natural way to express negative quantities.
Indeed, the second example in The C Programming Language by Brian Kernighan and Dennis Ritchie is a program that converts temperatures between the Fahrenheit and Celsius scales. It is their very first example of a numeric application of the C language.
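A minimal sketch in the spirit of that example (not K&R's exact code): below 32 °F the Celsius value goes negative, so a signed type is the natural choice.

```c
#include <stdio.h>

/* Print a Fahrenheit-to-Celsius table; entries below 32 F
   produce negative Celsius values. */
int main(void)
{
    int fahr;
    for (fahr = 0; fahr <= 300; fahr += 20)
        printf("%3d %6.1f\n", fahr, (5.0 / 9.0) * (fahr - 32));
    return 0;
}
```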
Array sizes are indeed positive numbers, but pointer offsets may be negative in C.
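A short illustration of a negative pointer offset (a made-up example, not from the book):

```c
#include <stdio.h>

/* p points into the middle of the array, so negative offsets
   such as p[-2] refer to valid earlier elements. */
int main(void)
{
    int a[5] = {10, 20, 30, 40, 50};
    int *p = &a[3];
    printf("%d %d\n", p[-2], *(p - 1));   /* prints "20 30" */
    return 0;
}
```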
Other languages such as Ada let you specify the range of numeric variables, but arithmetic computation still assumes continuity at 0, and negative numbers are implied by this.
Unsigned arithmetic, as specified in C, is actually confusing: `1U - 2U` is greater than `0`, just like `-1U`. Making this the default would be so counter-intuitive!