I was looking at some assembly code and came across the following (which I've converted for ease of the reader). All registers are 8 bits and pointers are 16 bits, so (q) loads 8 bits.
(q+1) = (q+1) + rr((q+1))

where (q) dereferences q and rr(x) rotates x right (through the carry).

(q) = (q) + (q)/2 + bit((q+1), 0)

where bit((q+1), 0) is the 0th bit of (q+1).
This really confused me, because what the code above does is multiply a 16-bit value by 1.5, regardless of its endianness (i.e. however you interpret the two bytes at q, be it little endian or big endian, the value is multiplied by 1.5 under that interpretation).
I'm confused about how they're going about multiplying a 16-bit value by 1.5 using two 8-bit values. What's going on here? Specifically, what is the purpose of adding the 0th bit of (q+1) to (q), and of rotating (q+1) to the right?
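To pin the arithmetic down, here is a quick Python model of the two operations above (my own sketch, not from the original code), assuming the byte at q is the high byte and the byte at q+1 is the low byte:

```python
def mul_1_5(hi, lo):
    # (q) = hi, (q+1) = lo; model of the two pseudocode operations
    carry = hi & 1                      # bit shifted out of the high byte
    half_hi = hi >> 1                   # (q)/2
    half_lo = (lo >> 1) | (carry << 7)  # rr((q+1)): old carry enters bit 7
    s = lo + half_lo                    # (q+1) + rr((q+1))
    new_lo = s & 0xFF
    new_hi = (hi + half_hi + (s >> 8)) & 0xFF  # propagate the low-byte carry
    return new_hi, new_lo

assert mul_1_5(0x12, 0x34) == (0x1B, 0x4E)  # 0x1B4E == 1.5 * 0x1234
```

Under that assumption, the two bytes come out to value + (value >> 1), i.e. floor(1.5 * value), truncated to 16 bits.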
Here is the assembly code:
ld a, (q)      ; a = byte at q
ld b, a        ; b = byte at q
ld a, (q+1)    ; a = byte at q+1
ld c, a        ; c = byte at q+1
srl b          ; b = b >> 1, bit 0 of b falls into carry
rr c           ; rotate c right through carry: old carry enters bit 7
add c          ; a = a + c, sets carry on overflow
ld (q+1), a    ; store new low byte
ld a, (q)      ; reload byte at q
adc b          ; a = a + b + carry from the low-byte add
ld (q), a      ; store new high byte
ret
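Not from the original post, but here is a Python trace of those instructions, one line per instruction, with the carry flag modeled explicitly (assuming Z80-style flag semantics, where add sets carry from the sum and adc adds it back in):

```python
def mul_1_5_asm(q_hi, q_lo):
    a = q_hi                   # ld a, (q)
    b = a                      # ld b, a
    a = q_lo                   # ld a, (q+1)
    c = a                      # ld c, a
    carry, b = b & 1, b >> 1   # srl b: bit 0 of the high byte -> carry
    carry, c = c & 1, (c >> 1) | (carry << 7)  # rr c: rotate right through carry
    a += c                     # add c: plain add, ignores the incoming carry
    carry, a = a >> 8, a & 0xFF
    new_lo = a                 # ld (q+1), a
    a = q_hi                   # ld a, (q)
    a = a + b + carry          # adc b: add with the carry from the low-byte sum
    new_hi = a & 0xFF          # ld (q), a
    return new_hi, new_lo

# exhaustive check: result is value + (value >> 1), truncated to 16 bits
for v in range(0x10000):
    nh, nl = mul_1_5_asm(v >> 8, v & 0xFF)
    assert (nh << 8) | nl == (v + (v >> 1)) & 0xFFFF
```

Tracing it this way, b:c ends up holding value >> 1 (with the byte at q treated as the high byte), and the two adds compute value + (value >> 1).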
I didn't take the time to read through all of the assembly code in detail, but I strongly suspect @Ross Ridge is right.
This trick is called Horner's method. It's especially common in smaller embedded MCUs without multipliers, but it can also be used as a general speed optimization. See
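For illustration (my own sketch, not from the answer above): Horner's method multiplies by a constant one bit at a time, doubling an accumulator and conditionally adding the multiplicand. Multiplying by 1.5 = binary 1.1 is the two-step special case x + (x >> 1); an integer constant works the same way:

```python
def mul_const(x, c):
    # Horner-style shift-and-add: scan the constant's bits MSB-first;
    # each step doubles the accumulator, then adds x where the bit is 1.
    acc = 0
    for i in range(c.bit_length() - 1, -1, -1):
        acc = (acc << 1) + (x if (c >> i) & 1 else 0)
    return acc

assert mul_const(1234, 10) == 12340  # 10 = 0b1010
```

On a CPU with no multiply instruction, each step is just one shift and (for set bits) one add, which is why the pattern shows up so often on small MCUs.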