Why does the following code behave completely differently on IA-32 and x86-64?
#include <stdio.h>
int main() {
double a = 10;
printf("a = %d\n", a);
return 0;
}
On IA-32 the result is always 0. However, on x86-64 the result can be anything between INT_MIN and INT_MAX. What is the reason for this?
%d is used for printing an int. Historically the d stood for "decimal", to contrast with o for octal and x for hexadecimal.
For printing a double you should use %e, %f or %g.
Using the wrong format specifier causes undefined behaviour, which means anything may happen, including the results you observed. Concretely, under the usual calling conventions: on IA-32 all arguments are passed on the stack, so %d reads the low 32 bits of the pushed double, and the low half of 10.0 (bit pattern 0x4024000000000000) happens to be 0. On x86-64 (System V ABI) floating-point arguments are passed in XMM registers, so %d makes printf read an integer register that was never set for this call, and you get whatever garbage value is sitting there.