I am trying to write a function that takes three bit vectors representing the digits in use in the row, column, and block of a Sudoku puzzle, for positions 1-9. A cell can only use digits that are unused, and the function is supposed to return whether the digits in all three vectors force a single possibility or leave more than one. I took this to mean that I should merge all three vectors and then determine where there are "unset" bits in the resulting pattern.
However, in gdb my function does not appear to be returning the correct mask, even though it was inspired by this derivation: http://graphics.stanford.edu/~seander/bithacks.html#MaskedMerge
I am trying to merge one set with a second, then merge the third set into that result, count the number of 1s in the final merge, and subtract that count to find how many 0s there are.
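For reference, the masked merge on that page selects bits from one word where the mask is 1 and from the other where it is 0; a minimal sketch of the identity it derives:

    // Bit hacks masked merge: bits come from b where mask is 1, from a elsewhere.
    unsigned int masked_merge(unsigned int a, unsigned int b, unsigned int mask)
    {
        return a ^ ((a ^ b) & mask);  // same result as (a & ~mask) | (b & mask)
    }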
Then, I wrote the following function:
bool possibilities(unsigned short row, unsigned short col, unsigned short bloc)
{
    unsigned int mask = (1 << col) - 1;
    unsigned int inbw = (row & ~mask) | (col & mask);
    unsigned int mask2 = (1 << bloc) - 1;
    unsigned int result = (inbw & ~mask2) | (bloc & mask2);
    int num_1s = 0;
    while (result != 0) {
        result &= (result - 1);  // clear the lowest set bit (Kernighan's count)
        num_1s++;
    }
    int zeroes = 32 - num_1s; // 32 being the presumed number of bits in a short.
    if (zeroes == 1) return true;
    return false;
}
According to this document:
http://www.cplusplus.com/doc/tutorial/variables/
a short is not smaller than a char and is at least 16 bits. So computing the zeroes as 32 - num_1s could be wrong: the width of an unsigned short is not guaranteed to be 32.
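If you do not want to hard-code a width at all, you can compute it from the type itself; a minimal sketch using the standard <climits> header:

    #include <climits>

    // Storage width of unsigned short in bits: at least 16, often exactly 16.
    int short_bits = CHAR_BIT * (int)sizeof(unsigned short);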
Instead of hard-coding the width, you can take an unsigned short and fill it with 1s, clearing only the first 9 bits:

    var = 0xFE00;  // all 1s except the low 9 digit bits of a 16-bit short

That way the solution does not depend on the size of the variable you use.
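A minimal sketch of how that mask could be applied (assuming digit n is stored in bit n-1, so the digits occupy the low 9 bits):

    // Force every non-digit bit to 1 so the only 0s left are unused digits;
    // assumes a 16-bit unsigned short.
    unsigned short merged = (unsigned short)(row | col | bloc | 0xFE00u);
    unsigned short unused = (unsigned short)~merged;  // 1 bits mark free digits
    int zeroes = 0;
    while (unused != 0) {  // same Kernighan-style count as in the question
        unused &= unused - 1;
        zeroes++;
    }

With the high bits forced to 1 beforehand, the loop counts exactly the unused digits no matter how wide the type really is.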
A solution along those lines could be the following (assuming row, col and bloc as above):

    unsigned short possibilities = row | col | bloc;
    int num_0s = 0;
    for (int i = 0; i < 9; ++i) {  // visit all 9 digit bits, not just up to the highest 1
        num_0s += (~possibilities) & 1;
        possibilities >>= 1;
    }