Consider the following as a reference implementation:
/* calculates (a * b) / c */
uint32_t muldiv(uint32_t a, uint32_t b, uint32_t c)
{
    uint64_t x = a;
    x = x * b;
    x = x / c;
    return x;
}
I am interested in an implementation (in C or pseudocode) that does not require a 64-bit integer type.
I started sketching an implementation along these lines:
/* calculates (a * b) / c */
uint32_t muldiv(uint32_t a, uint32_t b, uint32_t c)
{
    uint32_t d1, d2, d1d2;
    d1 = (1 << 10);
    d2 = (1 << 10);
    d1d2 = (1 << 20); /* d1 * d2 */
    return ((a / d1) * (b / d2)) / (c / d1d2);
}
But the difficulty is picking values for d1 and d2 that avoid the overflow ((a / d1) * (b / d2) <= UINT32_MAX) while minimizing the error of the whole calculation.
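For instance, with the fixed d1 = d2 = 1 << 10 from the sketch, any a or b below 1024 is truncated to zero before the multiplication ever happens. Here is a small standalone illustration (the concrete values are just ones picked to show the failure mode, and c was deliberately chosen >= d1d2 so that the final divisor is not zero):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* illustrative values only */
    uint32_t a = 1000, b = 2000000, c = 1 << 21;          /* c >= d1d2, so c / d1d2 != 0 */
    uint32_t d1 = 1 << 10, d2 = 1 << 10, d1d2 = 1 << 20;

    uint32_t approx = ((a / d1) * (b / d2)) / (c / d1d2); /* a / d1 == 0, so approx == 0 */
    uint32_t exact  = (uint32_t)(((uint64_t)a * b) / c);  /* 64-bit reference: 953 */

    printf("approx = %" PRIu32 ", exact = %" PRIu32 "\n", approx, exact);
    return 0;
}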
Any thoughts?
I have adapted the algorithm posted by Paul for unsigned ints (by omitting the parts that deal with signs). The algorithm is basically Ancient Egyptian multiplication of a with the fraction floor(b/c) + (b%c)/c (with the slash denoting real division here).
uint32_t muldiv(uint32_t a, uint32_t b, uint32_t c)
{
    uint32_t q = 0;      /* the quotient */
    uint32_t r = 0;      /* the remainder */
    uint32_t qn = b / c; /* qn + rn/c represents the current multiple of b/c */
    uint32_t rn = b % c;

    while (a)
    {
        if (a & 1)
        {
            /* this bit of a is set: add the current multiple of b/c */
            q += qn;
            r += rn;
            if (r >= c)
            {
                /* carry from the fractional part into the quotient */
                q++;
                r -= c;
            }
        }
        a >>= 1;

        /* double the fraction for the next bit of a */
        qn <<= 1;
        rn <<= 1;
        if (rn >= c)
        {
            qn++;
            rn -= c;
        }
    }
    return q;
}
This algorithm will yield the exact answer as long as the result fits in 32 bits. You can optionally also return the remainder r.
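A quick way to sanity-check the adaptation (not part of the algorithm itself, just a test sketch I would use) is to compare it against the 64-bit reference implementation for a handful of inputs whose result fits in 32 bits:

#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* assumes the muldiv() above is in scope */
int main(void)
{
    uint32_t tests[][3] = {
        { 1u, 1u, 1u },
        { 4000000000u, 1000000u, 1000000u },
        { 123456789u, 54321u, 9876u },
        { UINT32_MAX, UINT32_MAX, UINT32_MAX },
    };

    for (size_t i = 0; i < sizeof tests / sizeof tests[0]; i++)
    {
        uint32_t a = tests[i][0], b = tests[i][1], c = tests[i][2];
        uint64_t ref = ((uint64_t)a * b) / c;  /* 64-bit reference */
        if (ref <= UINT32_MAX)                 /* only compare when the result fits */
            assert(muldiv(a, b, c) == (uint32_t)ref);
    }
    return 0;
}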