Many websites (e.g. GeeksForGeeks: "Difference between %d and %i format specifier in C language" or TutorialsPoint: "Difference between %d and %i format specifier in C") say that in scanf:

%d takes the integer value as a signed decimal integer, and
%i takes the integer value as an integer in decimal, hexadecimal, or octal form.
Since the word "signed" is used only for %d and not for %i, I expected %i to accept unsigned values only, but %i works perfectly fine with negative numbers. So what is it about %i that makes no one mention the word "signed" for it?
I wouldn't say "no one". The language standard says for %d:
Matches an optionally signed decimal integer. [...] The corresponding argument shall be a pointer to signed integer.
and for %i:
Matches an optionally signed integer. [...] The corresponding argument shall be a pointer to signed integer.
The language around signedness of the input representation and of the target of the corresponding pointer argument is identical for the two.
I can only speculate about why any web page author describes the meaning of %i
with less clarity and precision than you would like or than the standard does. Perhaps some are confused by the fact that %x
for hexadecimal format requires a pointer to an unsigned integer. Perhaps some are sloppy. Perhaps some lazily rely on the imprecise language of other web pages instead of on a primary source.