I've read that it's possible to optimize multiplication by a known constant at compile time by generating code which makes clever use of bit shifting and compiler-generated magic constants.
I'm interested in possibilities for optimizing exponentiation in a similar hacky manner. I know about exponentiation by squaring, so I guess you could aggressively optimize
pow(CONSTANT, n)
by embedding precomputed successive squares of CONSTANT into the executable. I'm not sure whether this is actually a good idea.
But when it comes to
pow(n, CONSTANT)
I can't think of anything. Is there a known way to do this efficiently? Do the minds of StackOverflow have ideas on either problem?
Assuming pow(a, b) is implemented as exp(b * log(a)) (which it probably is), if a is a constant then you can precompute its log. If b is a constant, it only helps if it is also an integer.