I got stuck while trying to use a for loop to solve a problem. Here's my simplified code:
#include <iostream>
#include <vector>

int main(int argc, const char * argv[])
{
    std::vector<int> a;
    a.push_back(2333);
    int n = 10, m = 10;
    for (int i = 0; i < -1; i++)
        m--;
    std::cout << m << std::endl;
    for (int j = 0; j < a.size() - 2; j++)
        n--;
    std::cout << n << std::endl;
    return 0;
}
Apparently a.size() = 1, so these two end conditions should be the same. However, when I ran the code on Xcode 9.4.1 I got unexpected results: m = 10 but n = 11. I also noticed that it took much longer to compute n than m. Why would I get such a result? Any help will be appreciated.
The value returned by size() is a std::size_t, which is an unsigned integral type. That means it can only represent non-negative numbers, and an operation whose mathematical result is negative wraps around to a large positive value, as in modular arithmetic.
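You can see the wrap-around directly; here is a minimal sketch (the exact printed value assumes a 64-bit std::size_t, which is typical on modern platforms):

#include <iostream>
#include <vector>

int main()
{
    std::vector<int> a;
    a.push_back(2333);
    // a.size() is 1, but the subtraction happens in std::size_t,
    // so 1 - 2 wraps to the largest representable value instead of -1.
    std::cout << a.size() - 2 << '\n';
    // Prints 18446744073709551615 (2^64 - 1) with a 64-bit size_t,
    // or 4294967295 (2^32 - 1) with a 32-bit one.
    return 0;
}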
Here, 1 - 2 is -1, which wraps to 2^32 - 1 on a system with a 32-bit std::size_t, so the loop runs 2^32 - 1 times, decrementing n on each iteration. In effect you subtract 2^32 - 1 from 10, and since the minimum value of a 32-bit signed integer is -2^31, n underflows along the way. Signed integer overflow/underflow is undefined behavior, so anything can happen.
In this case, it seems like the underflow wrapped around to the maximum value. So the result would be 10 - (2^32 - 1) + 2^32, which is 11; we add 2^32 to simulate the underflow wrapping around. In other words, after the (2^31 + 10)th iteration of the loop, n is the minimum possible value of a 32-bit integer. The next iteration causes the wrap-around, so n is now 2^31 - 1. Then the remaining 2^31 - 12 iterations decrease n to 11.
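Here is that arithmetic reproduced with unsigned types, where wrap-around is actually well-defined (unlike the signed case in your code); the 32-bit widths are an assumption chosen to match the numbers above:

#include <cstdint>
#include <iostream>

int main()
{
    std::uint32_t iterations = UINT32_MAX; // 2^32 - 1, the wrapped value of 1 - 2
    std::uint32_t n = 10;
    // Unsigned subtraction is defined to wrap modulo 2^32, which mimics
    // what this particular compiler appeared to do with the signed int.
    n -= iterations;        // 10 - (2^32 - 1) mod 2^32
    std::cout << n << '\n'; // prints 11
    return 0;
}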
Again, signed integer overflow/underflow is undefined behavior, so don't be surprised when something weird happens, especially with modern compiler optimizations. For example, your entire program could be "optimized" to do absolutely nothing, since it always invokes UB. You're not even guaranteed to see the output from std::cout << m << std::endl;, even though the UB is invoked after that line executes.
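If you simply want to avoid the wrap-around (a suggestion on my part, not something in your question), one common rewrite is to keep the arithmetic on the side of the loop variable, where it never goes negative:

#include <cstddef>
#include <iostream>
#include <vector>

int main()
{
    std::vector<int> a;
    a.push_back(2333);
    int n = 10;
    // j + 2 < a.size() never subtracts from an unsigned value,
    // so with a.size() == 1 the loop body correctly runs zero times.
    for (std::size_t j = 0; j + 2 < a.size(); j++)
        n--;
    std::cout << n << '\n'; // prints 10
    return 0;
}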