I have an algorithm where I currently use two unsigned integers as bitmaps to store information about the input; this limits the maximum input size to 64, so I'd like to create a version where the integers are replaced by a bitset or simple big integer. I started writing something using vector<bool>, but looking around on SO, I'm seeing a lot of answers telling me to avoid vector<bool>.
The operations I need:
When they are created, I know the maximum number of bits, but at first I'll need only 1 bit; then, at every step, one set is shifted left while the other will have a new highest bit added:
{
    a <<= 1;          // shift a left by one
    a[0] = x;         // set the new lowest bit (pseudocode; a is an unsigned int)
    b[++msb] = y;     // give b a new highest bit (pseudocode)
    if (a < b) b = a; // keep the smaller value in b
}
If I create the bitsets with size 1, and then gradually expand them, maybe the comparisons will be quicker than if I immediately set the length to the maximum and have potentially thousands of leading zeros?
So should I continue using vector<bool> or use std::bitset (which unfortunately is fixed-size) or write a simple biginteger implementation capable of just the operations mentioned above using a vector of unsigned ints?
Using vector<bool> you can initialize the vectors with zero length:
std::vector<bool> a(0), b(0);
and then perform the operations mentioned above like this:
{
    a.push_back(x);         // shift a left and append the new lowest bit
    b.insert(b.begin(), y); // prepend the new highest bit to b
    if (a < b) b = a;       // lexicographic compare; matches integer order because
                            // both vectors store the most-significant bit first
                            // and grow by one bit per step, so lengths stay equal
}
The operations you describe (leaving out the implicit interpretation as an integer) are exactly those provided efficiently by a deque. If you can tolerate the memory overhead, you could use std::deque<bool> (std::list<bool> would also work, but with even higher overhead).
If the overhead is too much, you could start with
struct Bits {
    std::deque<unsigned> deq;
    int ms_free, ls_free; // unused bits in the two end words
};
and write methods to push bits on either end (for the right end, you would deq.push_back() if ls_free == 0, and store into deq.back() otherwise). Comparison would use deq.size() and ms_free + ls_free to know how to align the two sequences.