Consider the following program:
#include <stdio.h>
#include <stdint.h>
int main()
{
    uint16_t result;
    uint16_t ui = 1;
    int16_t si = -1;
    result = si * ui;
    printf("%i", result);
    return 0;
}
This prints the value 65535, which is what I expect after having read this post: si is converted to the type of ui, so max+1 is added to it. Now, in the next code snippet, I change the type of result to uint_fast16_t.
#include <stdio.h>
#include <stdint.h>
int main()
{
    uint_fast16_t result;
    uint16_t ui = 1;
    int16_t si = -1;
    result = si * ui;
    printf("%li", result);
    return 0;
}
Now, the result is -1. What happens here? How can the result be signed?
Please see the code below. (As @Tom Karzes said, uint_fast16_t may be an ordinary unsigned int on some systems, in which case %lu would be the wrong format. And @bolov said one should use printf("%" PRIuFAST16 "\n", result); and printf("%" PRIdFAST16 "\n", (int_fast16_t) result);, so I changed my answer.)
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>
int main()
{
    uint_fast16_t result;
    uint16_t ui = 1;
    int16_t si = -1;
    result = si * ui;
    printf("sizeof(result) is %zu\n", sizeof(result));
    printf("%" PRIuFAST16 "\n", result);
    printf("%" PRIdFAST16 "\n", (int_fast16_t) result);
    return 0;
}
Running it will output (on a 64-bit computer):
sizeof(result) is 8
18446744073709551615
-1
Why is the output different for the same result? One format is PRIuFAST16, the other is PRIdFAST16. It's because it depends on how you view the same bits: as unsigned or as signed. In result = si * ui, both operands are first promoted to int, so the multiplication yields the int value -1. Converting -1 to an 8-byte unsigned type wraps it to 2^64 - 1 = 18446744073709551615, while reinterpreting that same bit pattern as a signed 64-bit value gives back -1.