PYTHON 3 EXAMPLE
>>> import six
>>> six.PY3
True
>>> import mock
>>> x = mock.MagicMock()
>>> y = min(x,2)
Traceback (most recent call last):
  File "C:\temp\athenapkgs\ext3py27-232140\noarch\pylib\site-packages\six.py", line 703, in reraise
    raise value
  File "<stdin>", line 1, in <module>
TypeError: '<' not supported between instances of 'int' and 'MagicMock'
PYTHON 2 EXAMPLE
>>> import mock
>>> import six
>>> six.PY2
True
>>> x=mock.MagicMock()
>>> y = min(x,2)
>>> y
2
>>>
Given the two examples above, I need the Python 3 version to not raise an error - what is wrong?
I messed around trying to find out how built-in functions work. I assume it is something to do with __int__ being called on the MagicMock by the min() built-in under Python 2 but not Python 3. I assume that because calling mock.MagicMock().__int__() in each REPL returns the integer 1.
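A quick way to check that claim (this relies on MagicMock's default configured return value for __int__) looks like this on either REPL:
>>> import mock
>>> mock.MagicMock().__int__()
1
>>> int(mock.MagicMock())
1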
This has nothing to do with __int__, because __int__ isn't involved in either scenario. mock.MagicMock() is not comparable to int in either case (you can coerce it to int thanks to the __int__ overload, but it isn't an int, and doesn't look like one to int).
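A short Python 3 sketch of that distinction (using the standard-library unittest.mock, which behaves the same way here as the external mock package):
>>> from unittest import mock
>>> x = mock.MagicMock()
>>> int(x)              # coercion works because __int__ is overloaded (default return value: 1)
1
>>> isinstance(x, int)  # but a MagicMock is not an int, and int won't treat it as one
False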
The only special methods that matter when built-ins compare objects are the rich comparison operators __lt__/__gt__ (and, on Python 2 in some scenarios, __le__/__ge__ or __cmp__, though the latter is deprecated and removed entirely in Python 3). While MagicMock technically implements them, it implements them to immediately return NotImplemented (which, for most purposes, is equivalent to not defining them at all). The interpreter takes NotImplemented to mean "I don't know how to compare myself to that other thing; ask the other thing if it knows how to compare itself to me"; int, of course, has no idea how to compare itself to a MagicMock and says the same thing. That is where the difference between Python 2 and 3 comes in.
On Python 3, when both types either lack __lt__/__gt__ or both return NotImplemented, Python 3 converts that to a TypeError (because the types are in fact incomparable, and that's the right thing to do).
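A minimal illustration of that rule, with a hypothetical class standing in for MagicMock:
>>> class Incomparable:
...     def __lt__(self, other):
...         return NotImplemented
...     def __gt__(self, other):
...         return NotImplemented
...
>>> min(Incomparable(), 2)   # both sides return NotImplemented, so Python 3 raises
Traceback (most recent call last):
  ...
TypeError: '<' not supported between instances of 'int' and 'Incomparable'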
On Python 2, though, it still "works" (for a terrible definition of "works"), because Python 2 had a poorly conceived fallback comparison for < and company: if the types were incompatible (they lacked __lt__/__gt__, or those methods returned NotImplemented when called in either direction), it fell back to a "default order comparison (<, >, <=, and >=) [that] gives a consistent but arbitrary order." That comparison was (among other things) based on the stringified names of the types involved and has nothing to do with their values (no matter what int value you compare your MagicMock to, the MagicMock will always compare the same way). I'll note that on my Python 2 interpreter, min(x, ANY_INT_HERE) always produces the MagicMock, never the int value, so I can't reproduce what you're seeing on Python 2.
If you want the MagicMock to behave as an int, coerce it at time of use, e.g.:
y = min(int(x), 2)
so it's an actual int on all versions of Python.
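For instance, on Python 3 (and, given MagicMock's default __int__ return value of 1, on Python 2 with the mock package as well):
>>> from unittest import mock
>>> x = mock.MagicMock()
>>> min(int(x), 2)   # int(x) is 1 under MagicMock's defaults, so this is min(1, 2)
1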