Is there any actual difference in performance? Is it faster? (Let's say I use it in at least 100 places in the same program; would that improve my program's speed?)
This question might be more appropriate on Software Engineering Stack Exchange.
If you're using an optimizing compiler, chances are any form of `n % <power of two>` will get optimized to `n & <power of two minus one>` anyway, since they are equivalent but on pretty much every architecture I can think of the latter is much more efficient.
The former form expresses your intent more clearly, though a lot of developers will recognize `n & 1` as a "faster version" of `n % 2`.
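As a minimal illustration, here is a C sketch (the function names are made up, and it assumes an *unsigned* operand; for signed types the two expressions differ for negative values, so the compiler has to emit slightly longer code for `%`):

```c
#include <stdio.h>

/* With an unsigned operand, typical optimizing compilers (e.g. gcc or
   clang at -O1 and above) compile both of these to the same single
   AND instruction, because 2 is a power of two. */
unsigned is_odd_mod(unsigned n) { return n % 2; }
unsigned is_odd_and(unsigned n) { return n & 1; }

int main(void) {
    unsigned n = 7;
    printf("%u %u\n", is_odd_mod(n), is_odd_and(n)); /* prints "1 1" */
    return 0;
}
```

You can confirm this yourself by comparing the generated assembly for the two functions (e.g. with `gcc -O2 -S`); in practice the choice is a readability question, not a performance one.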