We all know that the logical && operator short-circuits if the left operand is false, because if one operand is false the result is also false.
Why doesn't the bitwise & operator also short-circuit? If the left operand is 0, then we know the result is also 0. Every language I've tested this in (C, JavaScript, C#) evaluates both operands instead of stopping after the first.
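For example, here's a quick C test (rhs is just a helper I made up so the evaluation is visible as output):

```c
#include <stdio.h>

/* Helper with a visible side effect, so we can see whether
   the right-hand operand actually gets evaluated. */
int rhs(void) {
    puts("rhs evaluated");
    return 1;
}

int main(void) {
    int zero = 0;

    puts("testing &&:");
    if (zero && rhs()) {}   /* short-circuits: rhs() is never called */

    puts("testing &:");
    if (zero & rhs()) {}    /* no short circuit: rhs() is called anyway */

    return 0;
}
```

The && line never prints "rhs evaluated"; the & line always does.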
Is there any reason why it would be a bad idea to let the & operator short-circuit? If not, why don't most languages make it short-circuit? It seems like an obvious optimization.
I'd guess it's because a bitwise and in the source language typically gets translated fairly directly to a bitwise and instruction to be executed by the processor. That instruction, in turn, is implemented as one AND gate per bit in the hardware.
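To make that concrete, here's roughly what each operator asks of the machine, written out as plain C (a sketch only; the function names are mine and real code generation varies by compiler and target):

```c
/* Sketch of what each operator demands from the compiler.
   This is illustrative, not actual compiler output. */

int bitwise_and(int a, int b) {
    return a & b;        /* maps to a single AND instruction, no branch */
}

int logical_and(int a, int b) {
    if (a == 0)          /* short-circuit requires a test and a branch */
        return 0;
    return b != 0;
}
```

The short circuit in && only exists because the compiler inserts that test and branch; there is nothing corresponding to it in the single AND instruction that & becomes.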
I don't see this as much of an optimization in most cases. Evaluating the second operand will normally cost less than the test and branch needed to decide whether to skip it.
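Spelled out, a short-circuiting & would have to look something like this (purely hypothetical; I don't know of a language that does this):

```c
/* Hypothetical short-circuiting bitwise AND: the guard is a compare
   plus a branch, which usually costs as much as or more than the
   single AND instruction it is trying to avoid. */
int short_circuit_and(int a, int b) {
    if (a == 0)
        return 0;
    return a & b;
}
```

Unless evaluating the right-hand operand is unusually expensive, you've spent a branch (and possibly a branch misprediction) to save one of the cheapest instructions the processor has.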