I'd like to understand better why one would choose `int` over `unsigned`.

Personally, I've never liked signed values unless there is a valid reason for them: a count of items in an array, the length of a string, the size of a memory block, and so on, cannot possibly be negative, and a negative value has no possible meaning there. Why prefer `int` when it is misleading in all such cases?
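To make that concrete, here is a minimal sketch of the kind of code I mean (the function and its name are my own invention, not from the talk):

```cpp
#include <cstddef>

// A count of matching items can never be negative, so an unsigned type such
// as std::size_t reads as the natural, self-documenting choice here.
std::size_t count_matching(const int* data, std::size_t len, int target) {
    std::size_t count = 0;
    for (std::size_t i = 0; i < len; ++i) {
        if (data[i] == target) {
            ++count;
        }
    }
    return count;
}
```

Yet the advice below is, as I understand it, to write exactly this sort of function with `int`.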
I ask this because both Bjarne Stroustrup and Chandler Carruth gave the advice to prefer `int` over `unsigned` here (at approximately 12:30).
I can see the argument for using `int` over `short` or `long`: `int` is the "most natural" data width for the target machine architecture.
But preferring signed over unsigned has always annoyed me. Are signed values genuinely faster on typical modern CPU architectures? What makes them better?
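One argument I've seen elsewhere (my own aside, not something from the video) is that signed overflow is undefined behaviour in C++, so the optimizer is allowed to assume a signed index never wraps. A rough sketch of the kind of loop people point to:

```cpp
#include <cstdint>

// Signed index: overflow is undefined behaviour, so the compiler may assume
// `i` never wraps and can, for example, widen it to a 64-bit register on
// x86-64 without extra checks.
std::int64_t sum_signed(const int* data, int n) {
    std::int64_t sum = 0;
    for (int i = 0; i < n; ++i) {
        sum += data[i];
    }
    return sum;
}

// Unsigned index: wraparound modulo 2^32 is well defined and must be
// preserved, which can occasionally block the same transformation.
std::int64_t sum_unsigned(const int* data, unsigned n) {
    std::int64_t sum = 0;
    for (unsigned i = 0; i < n; ++i) {
        sum += data[i];
    }
    return sum;
}
```

I don't know how much this matters on real workloads, which is part of what I'm asking.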
Let me paraphrase the video, as the experts said it succinctly.
Andrei Alexandrescu:

- No simple guideline.
- In systems programming, we need integers of different sizes and signedness.
- Many conversions and arcane rules govern arithmetic (like for `auto`), so we need to be careful.

Chandler Carruth:
- Here are some simple guidelines:
  - Use signed integers unless you need two's complement arithmetic or a bit pattern (see the sketch after this list).
  - Use the smallest integer that will suffice.
  - Otherwise, use `int` if you think you could count the items, and a 64-bit integer if it's even more than you would want to count.
- Stop worrying and use tools to tell you when you need a different type or size.
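My reading of Carruth's guidelines, as a sketch (the names and flag values are mine, purely for illustration):

```cpp
#include <cstdint>
#include <vector>

// Bit patterns are the stated exception: use an unsigned type for flags/masks.
constexpr std::uint32_t kFlagRead  = 1u << 0;
constexpr std::uint32_t kFlagWrite = 1u << 1;

// Something you could plausibly count: plain int.
int count_positive(const std::vector<int>& values) {
    int count = 0;
    for (int v : values) {
        if (v > 0) {
            ++count;
        }
    }
    return count;
}

// More items than you would ever want to count: a 64-bit integer.
std::int64_t total_bytes_processed = 0;
```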
Bjarne Stroustrup:

- Use `int` until you have a reason not to.
- Use unsigned only for bit patterns.
- Never mix signed and unsigned.
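The "never mix" rule is the one whose motivation I can at least illustrate myself; a minimal example of the classic surprise:

```cpp
#include <iostream>

int main() {
    int i = -1;
    unsigned u = 1;

    // The usual arithmetic conversions convert i to a very large unsigned
    // value before the comparison, so this prints "false".
    std::cout << std::boolalpha << (i < u) << '\n';
}
```

Compilers can warn about this comparison (e.g. GCC/Clang's `-Wsign-compare`), which lines up with the "use tools" advice above.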
Wariness about signedness rules aside, my one-sentence takeaway from the experts:

Use the appropriate type, and when you don't know, use an `int` until you do know.