During the evaluation of `x + y`, under the hood Python calls `x.__add__(y)`. Why do both `x.__add__(y)` and `int.__add__(x, y)` produce the same output, even though it is `object.__add__(self, other)` which contains `self`?
To answer the title question: they don't in the general case, but they do in practice, because you rarely see methods overridden at the instance level.
Normally, the call `x.method(y)` binds the function object `method` found in the class to the instance `x` before calling it. The result is equivalent to `type(x).method.__get__(x, type(x))(y)`.
For normal functions, the descriptor `__get__` operation returns a bound method object, which is approximately a partial function object with the first argument filled in as `self`. You then call it with the remaining arguments.
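To make this concrete, here's a minimal sketch of that binding step (the class and method names are mine, not from the question):

```python
class Greeter:
    def hello(self, name):
        return f"hello, {name}"

g = Greeter()

# Attribute lookup finds the function in the class and invokes its
# __get__, producing a bound method with `self` pre-filled as g.
bound = type(g).hello.__get__(g, type(g))

print(g.hello("world"))     # hello, world
print(bound("world"))       # hello, world
print(bound.__self__ is g)  # True
```

The bound method even remembers the instance it was bound to, via its `__self__` attribute.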
But what if you have something like this:

```python
class C:
    def method(self, y):
        return y + 1

    def method2(self, y):
        return y + 2

x = C()
x.method = C.method2.__get__(x, C)  # manually bind method2 to x

print(C.method(x, 2))  # 3
print(x.method(2))     # 4
```
Since function objects are non-data descriptors, `x.method` will find the pre-bound method object in the instance dictionary instead of binding to the descriptor in the class dictionary.
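You can verify that lookup order directly: the pre-bound method lives in the instance dictionary, and deleting it restores normal lookup through the class. This is a small extension of the snippet above:

```python
class C:
    def method(self, y):
        return y + 1

    def method2(self, y):
        return y + 2

x = C()
x.method = C.method2.__get__(x, C)

# The instance dict now holds the pre-bound method object...
print('method' in vars(x))  # True
# ...and, for a non-data descriptor, the instance dict wins.
print(x.method(2))          # 4

# Removing it restores the usual class-level binding.
del x.method
print('method' in vars(x))  # False
print(x.method(2))          # 3
```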
`int` is an immutable class, mostly implemented in C, and you can't assign methods to its instances. So if `type(x) == int`, then `x.__add__(y)` is equivalent to `int.__add__.__get__(x, int)(y)`, which is functionally equivalent to `int.__add__(x, y)`.
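All three spellings can be checked side by side:

```python
x, y = 5, 7

# Bound call, explicit descriptor call, and unbound call all agree.
print(x.__add__(y))                    # 12
print(int.__add__.__get__(x, int)(y))  # 12
print(int.__add__(x, y))               # 12
```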
The caveat here is that dunder methods that implement operators get special treatment. Python optimizes the expression `x + y` to `type(x).__add__(x, y)`, not `x.__add__(y)`, which saves a couple of namespace lookups and a method-binding operation. The consequence is that if you override `__add__` on an instance, the instance attribute is ignored in favor of the class's definition of `__add__`.
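A short demonstration of that special lookup (the `Num` class is mine, for illustration):

```python
class Num:
    def __init__(self, v):
        self.v = v

    def __add__(self, other):
        return Num(self.v + other.v)

a, b = Num(1), Num(2)
a.__add__ = lambda other: "instance override"

# Explicit attribute lookup finds the instance-level function...
print(a.__add__(b))  # instance override
# ...but the + operator skips the instance dict and uses the class.
print((a + b).v)     # 3
```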