I did not write this code, but I would like to use it and understand exactly what it is doing.
unsigned int j;
/* fread() copies raw bytes from stdin into j -- it does NOT parse text */
if (fread(&j, sizeof(unsigned int), 1, stdin) != 1) {
    if (feof(stdin)) {
        fprintf(stderr, "# stdin_input_raw(): Error: EOF\n");
    } else {
        fprintf(stderr, "# stdin_input_raw(): Error: %s\n", strerror(errno));
    }
    exit(0);
}
printf("raw: %10u\n", j);
return j;
I do know that the code is reading unsigned integers from stdin, but in my tests the output j isn't the integer I typed into stdin by hand.
So I would like to know what the code is doing and how I might change it to return the correct input.
PS: I am using Visual Studio (C, not C++) on a Windows machine.
I don't have enough reputation points to comment on your question, but the reason you get 875770417 instead of 825373492 when you type 1234 is the following:
875770417(dec) = 0x34333231
825373492(dec) = 0x31323334
If you look closely you will see that the bytes in the two hex values are reversed (each pair of hex characters is one byte). This is because you are on a little-endian machine. The bytes arrived on stdin as '1' '2' '3' '4', i.e. 0x31 0x32 0x33 0x34. That is exactly how a little-endian machine lays out 875770417 in memory, so when it reads the same byte sequence back into a register it gets that integer, not the one you assumed was correct.
As a note, little-endian means the least-significant byte is stored at the lowest memory address, so 0x1A2A3A4A would be laid out as |4A|3A|2A|1A|, where left is the lowest address and right is the highest. In your example you had |31|32|33|34| in memory, so you read in 0x34333231.