C++11 introduces std::to_string, so I took a look at one implementation: it calls vsnprintf internally. Okay, but why does it always pass 4 times the size of the type as the buffer-size hint?
inline string
to_string(int __val)
{ return __gnu_cxx::__to_xstring<string>(&std::vsnprintf, 4 * sizeof(int),
                                         "%d", __val); }

inline string
to_string(unsigned __val)
{ return __gnu_cxx::__to_xstring<string>(&std::vsnprintf,
                                         4 * sizeof(unsigned),
                                         "%u", __val); }

inline string
to_string(long __val)
{ return __gnu_cxx::__to_xstring<string>(&std::vsnprintf, 4 * sizeof(long),
                                         "%ld", __val); }
The maximal number of binary digits needed to represent an N-digit decimal value is ceil(N * log(10) / log(2)). A single decimal digit therefore needs at most ceil(3.32) = 4 binary digits.
Inverting this for a type of Size bytes (8 bits each), the maximal number of decimal digits is:
Decimals = ceil(8 * Size / 3.32) = ceil(2.41 * Size).
Rounding that up and adding room for the sign and the terminating zero yields the allocation:
Decimals = 4 * Size.
Note: a conversion of a single signed char with snprintf needs 5 bytes ("-128", including the sign and the terminating zero), which would not fit into 4 * sizeof(char) = 4 bytes. For every type larger than one byte, however, Decimals = 4 * Size is big enough.