I have trouble understanding why we discard the most significant bits when we multiply two's complement numbers.
Let's take 101 (decimal: -3) and 011 (decimal: 3) as an example. The algorithm goes like this: first we double the length of our numbers, then we do the usual multiplication as we did in school with decimal numbers, then we take only the doubled length (3 * 2 = 6) least significant bits of the result.
Double the lengths (sign-extend):
101 -> 111101
011 -> 000011
Do multiplication:
111101 * 000011 = 10110111
(decimal: -73)
As we see, this result is wrong.
10110111 -> 110111
(decimal: -9)
And so the result magically becomes right. How can this be explained? I understand that the MSB is kind of special and that the rules I used in school cannot be 100% suitable for two's complement, but while I fully understand the school rules of multiplication, I can't wrap my head around the last step of two's complement multiplication (the first two steps I understand).
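To make the three steps concrete, here is a minimal C sketch of the procedure I described; ext6 is just a name I made up for the sign-extension step:

#include <stdio.h>

/* Step 1: sign-extend a 3-bit value to 6 bits. */
static unsigned ext6(unsigned v)
{
    return (v & 0x4) ? (v | 0x38) : v;   /* copy bit 2 into bits 3..5 */
}

int main(void)
{
    unsigned a = 0x5;                    /* 101 = -3 in 3-bit two's complement */
    unsigned b = 0x3;                    /* 011 =  3 */

    unsigned prod = ext6(a) * ext6(b);   /* step 2: 111101 * 000011, school style */
    unsigned low6 = prod & 0x3F;         /* step 3: keep the 6 least significant bits */

    for (int i = 5; i >= 0; i--)         /* print low6 as 6 binary digits */
        putchar('0' + ((low6 >> i) & 1));
    putchar('\n');                       /* prints 110111, i.e. -9 in 6 bits */
    return 0;
}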
Your multiplication is simply incorrect: you did an unsigned multiplication. In other words, this:
    111101
    000011 x
    ------
    111101
   111101
  000000
 000000
000000 +
----------
0010110111
Or in decimal, 61 x 3 = 183. But a signed multiplication requires extending the sign bit in the partial products as well, like this:
    111101
    000011 x
    ------
1111111101
111111101
00000000
0000000
000000 +
----------
1111110111
So now you correctly compute -3 x 3 = -9. Notice that the two products still agree in their low 6 bits (110111); that is why truncating to 6 bits made your result come out right even though the full product was wrong. The distinction matters for processors as well; compare the MUL and IMUL instructions on Intel processors.
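If it helps to see that mechanically, here is a minimal C sketch of the shift-and-add above with sign-extended partial products; signed_mul6 is a made-up name, and it assumes a non-negative multiplier, as in this example:

#include <stdio.h>

/* Schoolbook shift-and-add with sign-extended partial products:
   6-bit operands, low 10 bits of the result kept. */
static unsigned signed_mul6(unsigned a, unsigned b)
{
    unsigned result = 0;
    for (int i = 0; i < 6; i++) {
        if ((b >> i) & 1) {              /* multiplier bit set? */
            unsigned partial = a << i;   /* shifted partial product */
            if (a & 0x20)                /* multiplicand negative? */
                for (int j = 6 + i; j < 10; j++)
                    partial |= 1u << j;  /* extend the sign bit up to bit 9 */
            result += partial;
        }
    }
    return result & 0x3FF;               /* keep 10 bits, as in the sum above */
}

int main(void)
{
    unsigned p = signed_mul6(0x3D, 0x03); /* 111101 * 000011 */
    for (int i = 9; i >= 0; i--)
        putchar('0' + ((p >> i) & 1));    /* prints 1111110111, i.e. -9 */
    putchar('\n');
    return 0;
}

A negative multiplier would additionally require subtracting its sign-bit partial product instead of adding it (or negating both operands first).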
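And a quick C analogue of the MUL versus IMUL distinction (a sketch, nothing Intel-specific): multiplying the same bit patterns as signed and as unsigned gives different full products but identical low 6 bits, which is exactly why the truncation step in the question repaired the result:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int8_t  sa = -3,   sb = 3;    /* signed view: 11111101 and 00000011 */
    uint8_t ua = 0x3D, ub = 0x03; /* the patterns 111101 and 000011 read as plain numbers */

    int      sp = sa * sb;        /* IMUL-like signed product:   -9  (...1111110111) */
    unsigned up = ua * ub;        /* MUL-like unsigned product: 183  (0010110111)   */

    printf("signed:   %d\n", sp);
    printf("unsigned: %u\n", up);
    printf("low 6 bits: %u vs %u\n", (unsigned)(sp & 0x3F), up & 0x3F);
    return 0;
}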