What is a scalable algorithm for manually printing an N-byte integer whose value does not fit in a long long?
I know printf
and friends, along with <iostream>
(which most likely piggy-backs on <cstdio>),
have this built in for the standard types, but I'd like to do it for an integer composed of N bytes.
I have thought about this and googled a bit, but the answer always comes down to using a pre-existing bigint library like GMP (a codebase I am not at all familiar with), or "use printf", or the most helpful "this is difficult".
The integer is basically:
template <size_t N>
class Integer {
    ...
private:
    int8_t first;
    uint8_t rest[N - 1];
};
so reinterpreting an Integer<4>
's bytes would get you an int32_t
. I'd like to scale this to N > 8. Efficiency is not really my concern at the moment, and neither is endianness (this is for x86, so little-endian).
Step 1: Define a lookup table containing powers of two in string format:
const char * const powers_of_two[] = {"1", "2", "4", "8", "16", "32", "64", ...};
Step 2: Write a function that adds two numbers in string format.
Step 3: Iterate through the bits in your number and add together the strings corresponding to the 1 bits.
Step 4: Print the result.
I used this approach myself for printing very large floating point numbers, and it worked fine for me.