Given a generic integer type IntType, it is easy to determine the necessary buffer size for a base-10 std::to_chars operation:
std::array<char, std::numeric_limits<IntType>::digits10 + 1 + std::is_signed<IntType>::value> buf;
Since std::to_chars doesn't NUL-terminate and only writes the digits (plus a leading '-' for negative values of signed types), this works for all built-in integer types. The + 1 is needed because digits10 for an integral type is the floor of the base-10 logarithm of its maximum value, which is one less than the number of digits in that maximum.
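As a quick sketch of how that buffer might be used (the choice of long long and the worst-case test value are only illustrative):

#include <array>
#include <charconv>
#include <cstdio>
#include <limits>
#include <type_traits>

int main() {
    using IntType = long long;  // any built-in integer type works the same way
    std::array<char, std::numeric_limits<IntType>::digits10 + 1 +
                     std::is_signed<IntType>::value> buf;
    IntType value = std::numeric_limits<IntType>::min();  // worst case: sign plus maximum digit count
    auto [ptr, ec] = std::to_chars(buf.data(), buf.data() + buf.size(), value);
    // ec == std::errc{} here, because the buffer is sized for the worst case
    std::printf("%.*s\n", static_cast<int>(ptr - buf.data()), buf.data());
}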
This leads to the question: what is the minimal buffer size for a floating-point std::to_chars call, given a generic FloatType to be converted without loss (writing all significant decimal digits), for each of the std::chars_format values?
Note that the minimal buffer size required differs depending on the floating-point format desired. Using max_digits10 and max_exponent10 is always enough to determine the minimum number of characters necessary for base-10 output, assuming one doesn't want to output more precision than the floating-point type actually holds.
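For concreteness, these two quantities can be inspected directly; on the common IEEE-754 platforms, float gives 9 and 38, and double gives 17 and 308:

#include <cstdio>
#include <limits>

int main() {
    // Typically 9/38 for float and 17/308 for double (IEEE-754 binary32/binary64).
    std::printf("float:  max_digits10=%d max_exponent10=%d\n",
        std::numeric_limits<float>::max_digits10,
        std::numeric_limits<float>::max_exponent10);
    std::printf("double: max_digits10=%d max_exponent10=%d\n",
        std::numeric_limits<double>::max_digits10,
        std::numeric_limits<double>::max_exponent10);
}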
This problem is not limited to to_chars, either: the C standard library functions in the printf family produce the same output lengths, so the analysis applies with equal weight in C as it does in C++.
std::chars_format::scientific or %e (printf specifier):
template<typename T>
constexpr int log10ceil(T num) {
    return num < 10 ? 1 : 1 + log10ceil(num / 10);
}
std::array<char, 4 +
std::numeric_limits<FloatType>::max_digits10 +
std::max(2, log10ceil(std::numeric_limits<FloatType>::max_exponent10))
> buf;
The function log10ceil allows constexpr evaluation of how many digits the largest possible exponent has. At least 2 digits must be present in the exponent per the standard, hence the std::max against a minimum exponent width of 2. The precision used when writing must be no larger than max_digits10 - 1; since the precision here counts digits after the decimal point and one digit always appears before it, this yields max_digits10 significant digits, and using exactly this precision provides a lossless conversion to a string representation. The addition of 4 characters accommodates the possible sign, the decimal point, and the "e+" or "e-" in the output.
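Putting it together, a sketch for double (the log10ceil helper is repeated from above so the snippet compiles on its own; the worst-case test value and the printing are only illustrative):

#include <algorithm>
#include <array>
#include <charconv>
#include <cstdio>
#include <limits>

template<typename T>
constexpr int log10ceil(T num) {
    return num < 10 ? 1 : 1 + log10ceil(num / 10);
}

int main() {
    using FloatType = double;
    std::array<char, 4 +
        std::numeric_limits<FloatType>::max_digits10 +
        std::max(2, log10ceil(std::numeric_limits<FloatType>::max_exponent10))
    > buf;  // 4 + 17 + 3 = 24 for double
    FloatType value = -std::numeric_limits<FloatType>::max();  // longest possible output
    auto [ptr, ec] = std::to_chars(buf.data(), buf.data() + buf.size(), value,
        std::chars_format::scientific,
        std::numeric_limits<FloatType>::max_digits10 - 1);
    // ec == std::errc{} here; "-1.7976931348623157e+308" fills the buffer exactly
    std::printf("%.*s\n", static_cast<int>(ptr - buf.data()), buf.data());
}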
std::chars_format::fixed or %f (printf specifier):
std::array<char, 2 +
std::numeric_limits<FloatType>::max_exponent10 +
std::numeric_limits<FloatType>::max_digits10
> buf;
Again, the precision used must be no larger than max_digits10 - 1. With this precision, values of magnitude 1 or greater convert losslessly; values much smaller than 1 can lose digits in fixed notation, because some of their significant digits fall beyond the printed precision. The addition of 2 characters accommodates the possible sign and the decimal point in the output.
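Assuming the buf just declared (with FloatType as double) and the same includes as the scientific sketch, the call itself would look roughly like this:

FloatType value = -std::numeric_limits<FloatType>::max();  // worst case: sign plus 309 integer digits
auto [ptr, ec] = std::to_chars(buf.data(), buf.data() + buf.size(), value,
    std::chars_format::fixed,
    std::numeric_limits<FloatType>::max_digits10 - 1);
// For double this fills exactly 2 + 308 + 17 = 327 characters:
// sign, 309 integer digits, the decimal point, and 16 fractional digits.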
std::chars_format::general or %g (printf specifier):
For the general case, the minimal buffer size is always the same as for the scientific case. However, the precision used must be no larger than max_digits10, rather than max_digits10 - 1 as above, because for the general format the precision counts total significant digits instead of digits after the decimal point; using exactly max_digits10 provides a lossless conversion to a string representation.
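The call differs from the scientific sketch only in the format and precision arguments; reusing that buffer, roughly:

auto [ptr, ec] = std::to_chars(buf.data(), buf.data() + buf.size(), value,
    std::chars_format::general,
    std::numeric_limits<FloatType>::max_digits10);  // precision counts significant digits here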
Note that in all these examples, the buffer is exactly the size of the largest string representation. If a NUL-terminator or other content is needed, the size must be increased accordingly.
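For example, a NUL-terminated result could be produced as follows (a sketch; buf_size stands for whichever of the sizes above applies, and value, fmt, and precision are placeholders):

std::array<char, buf_size + 1> buf;  // one extra char reserved for the terminator
auto [ptr, ec] = std::to_chars(buf.data(), buf.data() + buf.size() - 1, value, fmt, precision);
*ptr = '\0';  // buf.data() is now a NUL-terminated C string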