I'm rendering Buddhabrot fractals and I'm looking for some optimisations/speedups, and I was wondering whether it could be worthwhile to compute z = z^2 + c using bitwise operators. I've already simplified it down a bit:
double zi2 = z.i*z.i;
double zr2 = z.r*z.r;
double zir = z.i*z.r;
while (iterations < MAX_BUDDHA_ITERATIONS && zi2 + zr2 < 4) {
    z.i = c.i;
    z.i += zir;
    z.i += zir;             /* z.i = 2*z.i*z.r + c.i */
    z.r = zr2 - zi2 + c.r;  /* z.r = z.r^2 - z.i^2 + c.r */
    zi2 = z.i*z.i;
    zr2 = z.r*z.r;
    zir = z.i*z.r;
    iterations++;
}
z^2 + c can be expressed in terms of the fused multiply-add operation. This is available as a single instruction on some processors and is becoming available on others; on processors where it is not available in hardware, it is usually optimized or at least optimizable. For instance, C99 defines the fma family of functions in <math.h> to provide it. So I'd say that what you want is probably happening already and, if it's not, there's a very readable way to guarantee that it is.
In general, you should be highly suspicious any time your subconscious whispers that it would be faster to replace readable, maintainable code with some less-readable, less-maintainable, harder-to-debug solution X you have just dreamed up. Readability and maintainability are extremely important not just for writing code well, but for sharing it and for reasoning about its correctness; computers are fast, and compilers are pretty decent.