I have a project where I deal with large numbers (ns-timestamps) that don't fit in an integer. I therefore want to use e.g. int64_t and am currently writing a test case (yes!).
To check the behaviour for large numbers, I started with something like
int64_t val = 2*std::numeric_limits<int>::max();
qDebug() << "long val" << val;
which returns
long val -2
(same as if I define val as int).
But if I write
int64_t val = std::numeric_limits<int>::max();
val *= 2;
qDebug() << "long val" << val;
I get
long val 4294967294
which looks correct.
So to me it looks as if the 2*max() is first stored in an int (truncated in this step) and only then copied to the int64_t. Why does this happen? The compiler knows that the result is of type int64_t, so the 2*max() should fit directly.
So to me it looks as if the 2*max() is first stored in an int (truncated in this step) and only then copied to the int64_t.
This is absolutely correct. According to the language specification, the type in which an expression is evaluated depends only on the types of its operands, not on the variable it is assigned to. In your case, both 2 and max() have type int, so the multiplication is done in int and overflows (which is undefined behaviour for signed integers) before the result is ever converted to int64_t.
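A small illustration of my own (not part of the original answer): decltype makes this operand-driven typing visible, assuming a platform where int is 32 bits and int64_t is a wider type.
#include <cstdint>
#include <limits>
#include <type_traits>

// The product of two ints has type int, regardless of where it is stored.
static_assert(std::is_same<decltype(2 * std::numeric_limits<int>::max()), int>::value,
              "2*max() is computed as int");

// Widening one operand first makes the whole expression 64-bit.
static_assert(std::is_same<decltype(std::int64_t{2} * std::numeric_limits<int>::max()), std::int64_t>::value,
              "int64_t{2}*max() is computed as int64_t");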
The compiler knows that the result is of type int64_t, so the 2*max() should fit directly.
The type of the variable to which an expression is assigned does not matter in this situation: the expression itself determines how it is calculated. You can get the correct result by casting max() to int64_t, so that the multiplication is done in 64 bits:
int64_t val = 2 * static_cast<int64_t>(std::numeric_limits<int>::max());
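For completeness, a minimal runnable sketch (assuming the same Qt/qDebug setup as in the question) that achieves the same thing by making one operand 64-bit up front:
#include <QDebug>
#include <cstdint>
#include <limits>

int main()
{
    // One 64-bit operand is enough: the int operand is widened before the multiplication.
    std::int64_t val = std::int64_t{2} * std::numeric_limits<int>::max();
    qDebug() << "long val" << val;   // prints: long val 4294967294
}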