For the following code:
https://godbolt.org/z/WcGf9hEs3
#include <stdio.h>

int main() {
    char temp_buffer[8];
    double val = 25.3;
    sprintf(temp_buffer, "%.*g", sizeof(temp_buffer), val);
    printf("%s", temp_buffer);
}
I get the following warnings with gcc 11.3 and the -Wall flag:
<source>:8:29: warning: field precision specifier '.*' expects argument of type 'int', but argument 3 has type 'long unsigned int' [-Wformat=]
8 | sprintf(temp_buffer, "%.*g", sizeof(temp_buffer), val);
| ~~^~ ~~~~~~~~~~~~~~~~~~~
| | |
| int long unsigned int
<source>:8:27: warning: '%.*g' directive writing between 1 and 310 bytes into a region of size 8 [-Wformat-overflow=]
8 | sprintf(temp_buffer, "%.*g", sizeof(temp_buffer), val);
| ^~~~
<source>:8:26: note: assuming directive output of 12 bytes
8 | sprintf(temp_buffer, "%.*g", sizeof(temp_buffer), val);
| ^~~~~~
<source>:8:5: note: 'sprintf' output between 2 and 311 bytes into a destination of size 8
8 | sprintf(temp_buffer, "%.*g", sizeof(temp_buffer), val);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The destination buffer is indeed too small to hold the value given the precision argument, but what is the warning 'sprintf' output between 2 and 311 bytes into a destination of size 8 about? Where does that 311 bytes value come from?
If I cast the precision argument to int, i.e. (int)sizeof(temp_buffer), the potential overflow numbers drop dramatically:
'sprintf' output between 2 and 16 bytes into a destination of size 8
There are multiple issues in the code:

- sprintf expects an int value for the * placeholder and you pass a size_t, which may have a different size and representation.
- Passing sizeof(temp_buffer) is an error; the compiler seems confused about the actual argument values and makes no particular assumption about the precision value or the number to convert. Yet the 2 to 311 bytes estimate in the diagnostic itself seems mistaken:
  - For 25.3, the exact representation of the closest IEEE 754 number is 25.300000000000000710542735760100185871124267578125, requiring 52 bytes.
  - printf("%.1000g", -0x1.fffffffffffffp+1023) produces 310 characters, thus requiring 311 bytes, which seems to be the reason for the 2 to 311 bytes.
  - A %.*g conversion can actually produce more than 311 bytes: printf("%.1000g", -5e-324) produces 758 characters on both macOS and Linux (the first sketch below measures these lengths).
- When you cast sizeof(temp_buffer) as (int), the compiler determines that the precision is 8 (a non-trivial optimisation) and determines that the output can be as small as 2 bytes (a single digit and a null terminator) but no longer than 16 bytes: a -, a digit, a ., 7 decimals, an e, a - and as many as 3 exponent digits, plus a null terminator (the second sketch below hits this worst case). That is still potentially too much for an 8-byte array.

Good job for warning the programmer about this potential undefined behavior!
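You can let snprintf compute those lengths for you: with a null destination and size 0 it returns the number of characters the conversion would produce (C99 behaviour). A minimal sketch, assuming 64-bit IEEE 754 doubles:

#include <float.h>
#include <stdio.h>

int main() {
    /* Most negative finite double: 309 integral digits plus the sign */
    printf("%d\n", snprintf(NULL, 0, "%.1000g", -DBL_MAX));  /* 310, i.e. 311 bytes with the null */
    /* Smallest denormal: %g switches to e-style and prints 751 significant digits */
    printf("%d\n", snprintf(NULL, 0, "%.1000g", -5e-324));    /* 758 */
    /* The value from the question */
    printf("%d\n", snprintf(NULL, 0, "%.1000g", 25.3));       /* 51, i.e. 52 bytes with the null */
}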
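The 16-byte upper bound for the cast version corresponds to a negative value with a 3-digit negative exponent; for example (the value is chosen purely for illustration):

#include <stdio.h>

int main() {
    char buf[16];
    /* sign + digit + '.' + 7 decimals + 'e' + sign + 3 exponent digits = 15 characters */
    int len = snprintf(buf, sizeof buf, "%.8g", -1.2345678e-100);
    printf("%s needs %d characters plus a null terminator\n", buf, len);  /* 15 + 1 = 16 bytes */
}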
Use snprintf(), a larger array, and pass (int)(sizeof(temp_buffer) - 9) as the precision to get as many decimals as will fit in the worst case. It is difficult to produce as many decimals as will fit in all cases; that may require multiple tries or complex postprocessing.