printf("%f", 20);
results in the output 0.000000, not 20.000000. I'm guessing this has to do with how an int is represented in memory versus how a double is represented. To my surprise, no matter how I alter 20, for example by making the number larger, the output is still 0.000000. Could someone please explain the underlying mechanics of this?
Most probably you are compiling your code on a platform/ABI where arguments are passed in registers even for varargs functions, and in particular where different registers are used for integer and floating-point values. x86_64 on Linux/OS X behaves like that.
The caller has an integer to pass, so it puts it into rsi; on the other side, printf expects a floating-point value, so it tries to read it from xmm0. No matter how you change your integer argument, printf will be unaffected: it will just print whatever happens to be in xmm0 at the moment of the call.
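For illustration, here is a minimal sketch of the situation, assuming an x86_64 System V target such as Linux or OS X; the value printed by the first call depends entirely on whatever is left in xmm0, so it may differ on your machine:

#include <stdio.h>

int main(void) {
    /* Mismatch: "%f" expects a double, but 20 is an int.
     * On x86_64 System V the 20 is placed in rsi, while printf
     * reads its floating-point argument from xmm0, so the output
     * is whatever happens to be in that register (undefined behavior). */
    printf("%f\n", 20);

    /* Conforming alternatives: pass an actual double, or match the specifier. */
    printf("%f\n", 20.0);          /* prints 20.000000 */
    printf("%f\n", (double)20);    /* prints 20.000000 */
    printf("%d\n", 20);            /* prints 20 */

    return 0;
}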
You can actually check if this is the case by changing your call to:
printf("%f", 20, 123.45);
If it's working as I described, you should see 123.45 printed (the caller here puts 123.45 into xmm0, as it is the first floating-point parameter passed to the function; printf behaves as before, but this time finds a different value in xmm0).
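As a sketch, the check as a complete program could look like this (again assuming the x86_64 System V calling convention); note that a compiler with warnings enabled, e.g. gcc -Wall, will flag the %f/int mismatch in either version:

#include <stdio.h>

int main(void) {
    /* 20 is passed in an integer register (rsi); 123.45 is the first
     * floating-point argument, so the caller loads it into xmm0.
     * "%f" makes printf read xmm0, so 123.45 is what gets printed.
     * This is still undefined behavior, just a revealing instance of it. */
    printf("%f\n", 20, 123.45);
    return 0;
}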