So, within a program, I have an ordinary for loop through a vector of objects (objects of a type I defined, if that's relevant):
for(int k = 0; k < objects.size(); k++){ ... }
...and when I compile, I get this warning:
warning: comparison between signed and unsigned integer expressions
This makes sense, since I think size() for a vector returns a size_t. But why would it matter? Isn't a certain number of elements (or even memory chunks) an integer that you can count? More importantly, since my program has multiple such loops and happens to segfault a lot, could this be part of it?
The problem arises when objects.size() returns a value greater than the maximum value that k can represent. Since k is signed, its maximum value is only about half that of a size_t.¹
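To make that difference concrete, here is a minimal sketch (assuming a typical platform; the exact values depend on your compiler and target) that simply prints the two limits being compared:

#include <cstddef>
#include <iostream>
#include <limits>

int main() {
    // Largest value a signed int loop counter can hold before overflowing.
    std::cout << "max int:    " << std::numeric_limits<int>::max() << '\n';
    // Largest value vector::size() (a size_t) can report.
    std::cout << "max size_t: " << std::numeric_limits<std::size_t>::max() << '\n';
}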
Now, this may not happen in your particular application (on a typical 32-bit system, that would be upwards of two billion objects in your collection), but it's always a good idea to use the correct types.
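For what it's worth, one common way to make the warning go away is to declare the counter with the same unsigned type that size() returns. A sketch, where Object stands in for your own element type:

#include <cstddef>
#include <vector>

struct Object { int value; };  // placeholder for the asker's own type

int main() {
    std::vector<Object> objects(10);

    // The counter now has the same type as objects.size(), so the
    // comparison involves no signed/unsigned conversion.
    for (std::size_t k = 0; k < objects.size(); k++) {
        objects[k].value = static_cast<int>(k);
    }
}

A range-based for loop (for (auto& obj : objects)) avoids the index, and with it the warning, altogether.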
¹ Pre-emptive rebuttal: Yes, this is only true for machines using typical two's-complement arithmetic, and for machines where int and size_t are represented using the same number of bits.