I was curious how exactly the bitwise NOT operation performs inversion so efficiently and what it looks like 'under the hood', so I tried to implement it myself, as seen below:
```c
#include <string.h>

void bitwiseNOT(char* binaryString) {
    int binaryLength = strlen(binaryString);
    for (int i = 0; i < binaryLength; i++) {
        binaryString[i] = (binaryString[i] == '1') ? '0' : '1';
    }
}
```
This correctly inverts `binaryString`, but it is a lot less efficient than simply using the bitwise NOT operator (`~`), e.g. by doing:
```c
long int binary = 0b101010111010001011;  /* binary literal (C23, also a common compiler extension) */
binary = ~binary;
```
What exactly happens at a low level that causes it to be quicker than my implementation?
The OP function `bitwiseNOT` is entirely different from C's `~` operator: the OP function operates on a string, while `~` operates on integer types.
What is done "under the hood" depends on the implementation; the C Standard does not specify how `~` must be implemented. But an implementation will most likely just emit a single assembly instruction that performs the bitwise NOT. The OP's `bitwiseNOT` function certainly does not compile down to a single instruction: at `-O3`, GCC compiles the OP's code to 155 lines of assembly, while Clang compiles it to 85 lines. This is comparing apples to oranges, since `bitwiseNOT` and the `not_wrapper` function below do different things, but the OP's function clearly generates far more assembly than an application of C's bitwise `~` operator.
Looking at the results of compilation on the Godbolt Compiler Explorer, GCC and Clang emit the same x86-64 assembly at `-O3` for a simple wrapper function around `~`. In both cases the `~` compiles down to a single `not` instruction after the argument is moved into a register.
C function:

```c
int not_wrapper(int x) {
    return ~x;
}
```
GCC 13.2:

```asm
not_wrapper:
        mov     eax, edi
        not     eax
        ret
```
Clang 17.0.1:

```asm
not_wrapper:                    # @not_wrapper
        mov     eax, edi
        not     eax
        ret
```