I am trying to convert a float to an IEEE-754 Hex representation. The following code works on my Mac.
#include <stdio.h>
#include <stdlib.h>

union Data {
    int i;
    float f;
};

int main() {
    float var = 502.7;
    union Data value;
    value.f = var;
    printf("%08X\n", value.i);
    return 0;
}
This is giving me the expected result of 43FB599A.
When I run this code on an ATmega64a I get 0000599A (not 04A2599A as originally posted, which was a mistake).
The first two bytes are not what I expected, but the final two bytes look correct. Any ideas?
As mentioned in the accepted answer, I was assuming that int was 4 bytes. I was writing the code on my Mac and sending it to someone who was downloading it to an 8-bit ATmega64a. On the ATmega64a, int is 2 bytes, not 4, so I changed int to unsigned long, which is 4 bytes on the ATmega64a.

In addition, I had to add the length sub-specifier l to the format given to printf. This is because, given a conversion specifier of X, printf uses the type unsigned int to interpret the corresponding argument. Adding the length sub-specifier l tells printf to use the type unsigned long instead.

Using only the length sub-specifier l without changing the variable i to unsigned long caused printf to grab some extra bytes and output 04A2599A, as originally posted. I needed to change the type of i to unsigned long as well as use the length sub-specifier l.
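
For reference, a minimal sketch of the corrected code (assuming unsigned long is 4 bytes on the ATmega64a, as described above):

#include <stdio.h>

union Data {
    unsigned long i;   /* 4 bytes on the ATmega64a, same width as a 32-bit float */
    float f;
};

int main() {
    union Data value;
    value.f = 502.7;
    /* the l length modifier makes printf read the argument as unsigned long */
    printf("%08lX\n", value.i);
    return 0;
}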
This processor is an 8-bit one, which means the size of int is most likely 2 bytes, not 4 as your code assumes. Try to use uint32_t rather than int if you can.
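
A minimal sketch using a fixed-width type, assuming your AVR toolchain provides <stdint.h> and <inttypes.h>:

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

union Data {
    uint32_t i;   /* exactly 32 bits, matching a 32-bit float */
    float f;
};

int main() {
    union Data value;
    value.f = 502.7f;
    /* PRIX32 expands to the correct conversion specifier for uint32_t */
    printf("%08" PRIX32 "\n", value.i);
    return 0;
}

Using uint32_t removes the guesswork about how wide int or long happens to be on a given target, and the PRIX32 macro keeps the printf format in sync with the type.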