I was looking at the F# docs on bitwise operators:
Bitwise right-shift operator. The result is the first operand with bits shifted right by the number of bits in the second operand. Bits shifted off the least significant position are not rotated into the most significant position. For unsigned types, the most significant bits are padded with zeros. For signed types, the most significant bits are padded with ones. The type of the second argument is int32.
What was the motivation behind this design choice, compared to the C++ language (and probably C too), where the most significant bits are padded with zeros? E.g.:
int mask = -2147483648 >> 1; // C++ code
where -2147483648 =
10000000 00000000 00000000 00000000
and mask is equal to 1073741824
where 1073741824 =
01000000 00000000 00000000 00000000
Now if you write the same code in F# (where the shift operator is spelled >>>) or in C#, it will indeed pad the most significant bits with ones, and you'll get -1073741824,
where -1073741824 =
11000000 00000000 00000000 00000000
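
(For reference, a minimal C++ test such as the one below shows what a particular compiler actually does. Note that the result for a negative left-hand operand is implementation-defined before C++20, and INT32_MIN is used to sidestep the separate pitfall that the literal -2147483648 is parsed as unary minus applied to a constant that doesn't fit in int.)

#include <cstdint>
#include <iostream>

int main() {
    std::int32_t x = INT32_MIN;  // 10000000 00000000 00000000 00000000
    // Implementation-defined before C++20; most mainstream compilers
    // (GCC, Clang, MSVC) perform an arithmetic shift here and print
    // -1073741824, i.e. they pad with the sign bit rather than with zeros.
    std::cout << (x >> 1) << '\n';
}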
To answer the rephrased question (in the comments):
The C and C++ standards traditionally did not define the result of right-shifting a negative value: it is implementation-defined (not undefined) in C, and in C++ before C++20; C++20 finally pins it down as an arithmetic shift.
This is because the standards were defined to reflect the lowest common denominator of the underlying instruction sets. Enforcing a true arithmetic shift, for instance, takes several instructions if the instruction set doesn't provide an asr primitive (see the sketch below). This was further complicated by the fact that, until C++20 and C23 mandated two's complement, the standards permitted one's complement and sign-and-magnitude representations as well.
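
To illustrate that cost, here is a minimal sketch (mine, not from any standard) of emulating an arithmetic right shift on top of a purely logical, zero-filling shift, assuming two's complement 32-bit integers:

#include <cstdint>

// Hypothetical helper: arithmetic right shift built from a logical one.
// Assumes two's complement and 0 <= n <= 31.
std::int32_t asr(std::int32_t x, int n) {
    std::uint32_t u = static_cast<std::uint32_t>(x);
    std::uint32_t shifted = u >> n;  // logical shift: zero-fills the top bits
    if (x < 0) {
        // Replicate the sign bit into the n vacated high-order positions.
        shifted |= ~(0xFFFFFFFFu >> n);
    }
    // Converting a value above INT32_MAX back to int32_t is itself
    // implementation-defined before C++20, which rather proves the point.
    return static_cast<std::int32_t>(shifted);
}

On hardware with a native asr instruction this whole function collapses to a single opcode; without one, it is the handful of operations above, which is exactly the cost the committee declined to impose on every platform.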