Arithmetic on a mix of uint64_t and long produces unexpected results on ARM (C++ compiler); the same code works as intended on x86.
If long is replaced with uint64_t, it works as expected.
The ARMv7 compiler is c++ (Debian 6.3.0-18+deb9u1) 6.3.0 20170516
Code here also: http://cpp.sh/2xrnu
#include <cstdint>
#include <iostream>

int main()
{
    uint64_t x = 1000UL * 60 * 60 * 24 * 31;
    int i1 = 31;
    long l2 = 1000 * 60 * 60 * 24;
    uint64_t u2 = 1000 * 60 * 60 * 24;
    std::cout << "x : " << x << std::endl;
    std::cout << "i1 : " << i1 << std::endl;
    std::cout << "l2 : " << l2 << std::endl;
    std::cout << "u2 : " << u2 << std::endl;
    std::cout << "x - i1*l2: " << x - i1 * l2 << std::endl; // expected '0', got 4294967296
    std::cout << "x - i1*u2: " << x - i1 * u2 << std::endl; // expected and got '0'
    return 0;
}
I expected the last two lines to give '0'.
On x86, the result is
i1 : 31
l2 : 86400000
u2 : 86400000
x - i1*l2: 0
x - i1*u2: 0
On ARM (Cortex-A8), the result is
i1 : 31
l2 : 86400000
u2 : 86400000
x - i1*l2: 4294967296
x - i1*u2: 0
In this line of code:

std::cout << "x - i1*l2: " << x - i1 * l2 << std::endl; // expected '0', got 4294967296

when you multiply 31 by 86400000 you get 2678400000, which is 0x9FA52400 and does not fit in a 4-byte signed long (the sign bit would be set). That signed overflow is undefined behaviour, so formally you get a garbage value, which is then converted to uint64_t before being subtracted from x. In practice, the 32-bit multiplication wraps to -1616567296; sign-extending that to uint64_t and subtracting it from x gives, modulo 2^64, exactly 4294967296 (2^32), which matches your output. On x86-64, long is 8 bytes, hence you do not see the issue there.