I'm implementing algorithm D of section 4.3.2 of volume 2 of The Art of Computer Programming by D. E. Knuth.
On step D3 I'm supposed to compute

q = floor((u[j+n]*BASE + u[j+n-1]) / v[n-1])
r = (u[j+n]*BASE + u[j+n-1]) mod v[n-1]

Here, u (the dividend) and v (the divisor) are single-precision* arrays of length m+n and n, respectively. BASE is the representation base, which for a binary computer with 32-bit or 64-bit words equals 2^32 or 2^64, respectively.
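To make the question concrete, this is the step as I read it, sketched in C for BASE = 2^32 with uint32_t digits (the names trial_quotient, qhat and rhat are mine, following Knuth's q̂ and r̂):

    #include <stdint.h>

    /* Step D3 only, sketched for BASE = 2^32 with uint32_t digits.
       The numerator u[j+n]*BASE + u[j+n-1] occupies two words, so it
       is held in a uint64_t here. */
    void trial_quotient(const uint32_t *u, const uint32_t *v, int j, int n,
                        uint64_t *qhat, uint64_t *rhat)
    {
        uint64_t num = ((uint64_t)u[j + n] << 32) | u[j + n - 1];
        *qhat = num / v[n - 1];  /* can equal BASE, so it may not fit in 32 bits */
        *rhat = num % v[n - 1];  /* always < v[n-1], so it fits in 32 bits */
    }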
My question is about the precision in which q and r are represented. As I understand the rest of the algorithm, they are supposed to be single-precision*, but it's easy to spot many cases where they must be double-precision* to fit the result.
How are those values supposed to be computed? In what precision?
* The terms single-precision and double-precision refer to integer arithmetic (one or two machine words), not to floating-point arithmetic.
The intermediate numerator u[j+n]*BASE + u[j+n-1] does need double-precision (two-word) arithmetic, but when the divisor is normalized (most significant bit of v[n-1] set), each quotient digit will always fit in a single word. With a power-of-two base, normalization is accomplished by cheap left-shift operations.
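Here is a minimal sketch of that normalization step (Knuth's D1), again assuming BASE = 2^32 and uint32_t digits; the helper name and exact interface are mine:

    #include <stdint.h>

    /* Normalize v in place so that the most significant bit of v[n-1]
       is set, and return the shift count.  Assumes v[n-1] != 0, as
       Algorithm D requires. */
    static int normalize(uint32_t *v, int n)
    {
        int s = 0;
        uint32_t top = v[n - 1];
        while ((top & 0x80000000u) == 0) {   /* count leading zero bits */
            top <<= 1;
            ++s;
        }
        if (s == 0)
            return 0;
        for (int i = n - 1; i > 0; --i)      /* multi-word left shift */
            v[i] = (v[i] << s) | (v[i - 1] >> (32 - s));
        v[0] <<= s;
        return s;
    }

The same shift count is applied to u (which gains one extra high digit), and the remainder is shifted right by that amount again at the end (step D8).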