How to bind a type's `__init__` or `__new__` to some other class instance?


I was trying to set a type as an attribute of a class using the Python built-in setattr. I declared __new__ and __init__ methods in the type to see what their parameters would be, and surprisingly they are not bound to receive the class instance. I've read the Python docs on setattr and on descriptors, and I've run some tests in the interpreter, but I haven't found a way to bind the type's __new__ or __init__ methods to the class instance.

This is the code fragment I've been toying with:

T = type("T", (object,), {"__new__": lambda cls: print(f"T.__new__: {cls}") or object.__new__(cls), "__init__": lambda self: print(f"T.__init__: {self}")})

T()
# T.__new__: <class '__main__.T'>
# T.__init__: <__main__.T object at 0x7f970b667c10>
# <__main__.T object at 0x7f970b667c10>

class A:
    pass

setattr(A, "T", T)

A.T()
# T.__new__: <class '__main__.T'>
# T.__init__: <__main__.T object at 0x7f970b6675e0>
# <__main__.T object at 0x7f970b6675e0>

A().T()
# T.__new__: <class '__main__.T'>
# T.__init__: <__main__.T object at 0x7f970b667ee0>
# <__main__.T object at 0x7f970b667ee0> 

Essentially, I want to know how I can make T receive the instance of A in its __new__ or __init__ methods. I believe I haven't fully understood how setattr actually works and I'm misusing it, or the way to accomplish this behaviour isn't related to setattr at all.

PS.: Declaring T as a regular class changes nothing, and defining a __get__ method on T changes nothing either.


Solution

  • The setattr function is just a way to do programmatically what you could do with a normal assignment. Your call setattr(A, "T", T) is exactly the same as doing A.T = T. It doesn't help you achieve what you seem to want, which is for the class T to have binding behavior when looked up on A (or on an A instance).
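
    To see why, here is a minimal sketch (with throwaway stand-ins for T and A, plus an illustrative function f): both spellings store exactly the same object on the class, and the binding that methods enjoy comes from function objects being descriptors, which plain classes are not.

    class T:
        pass

    class A:
        pass

    A.T = T
    setattr(A, "T", T)                  # identical effect to the line above
    print(A.__dict__["T"] is T)         # True: the class is stored as-is

    def f(self):
        return self

    A.f = f
    print(A().f())                      # functions bind: prints an A instance
    print(hasattr(type(f), "__get__"))  # True  - function objects are descriptors
    print(hasattr(type(T), "__get__"))  # False - type is not, so T never binds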

    While you could write a metaclass that turns the T type you declare as an attribute of the A class into a descriptor, a much simpler approach is probably to give A a method that returns an instance of the T class, rather than exposing the class itself.

    class T:
        def __init__(self, a):
            self.a = a
    
    class A:
        def make_T(self):
            return T(self)
    

    Now you can do A().make_T() and you'll get a T instance that was passed the A instance as an argument to its __init__ method. If you want to, you can even rename make_T to T, and it will mostly work like you intended with your nested classes. It's not quite the same, since you can't use A.T as a class in other contexts, like isinstance checks. Using a name like make_T is a little bit clearer that it's a factory method, not a class itself.
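
    As a small sketch of that trade-off (the names here are just illustrative), this is the factory renamed to T: the call sites read like the nested-class spelling, but A.T is now a plain method rather than the class, so it can't be used in isinstance checks.

    class T:
        def __init__(self, a):
            self.a = a

    class A:
        def T(self, *args, **kwargs):   # factory method named like the class
            return T(self, *args, **kwargs)

    a = A()
    t = a.T()                           # reads like the nested-class version
    print(t.a is a)                     # True: the A instance was passed along
    print(isinstance(t, T))             # True against the real, module-level class
    # isinstance(t, A.T) raises TypeError, because A.T is a function, not a class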

    If you really do need to put the class T inside of A, here's the metaclass approach:

    class BindingInnerClass(type):
        def __get__(cls, obj, owner=None):
            # Looked up on the class itself: hand back the original class unchanged.
            if obj is None:
                return cls

            # Looked up on an instance: build a subclass that forwards that
            # instance to __new__/__init__, overriding only the ones the
            # original class actually defines.
            class BoundSubclass(cls):
                if cls.__new__ is not object.__new__:
                    def __new__(subcls, *args, **kwargs):
                        return super().__new__(subcls, obj, *args, **kwargs)

                if cls.__init__ is not object.__init__:
                    def __init__(self, *args, **kwargs):
                        super().__init__(obj, *args, **kwargs)

            # Cache the subclass on the instance so repeated lookups reuse it.
            setattr(obj, cls.__name__, BoundSubclass)
            return BoundSubclass
    
    class A:
        class T(metaclass=BindingInnerClass):
            def __init__(self, a):  # the metaclass will also work if you define __new__
                self.a = a
    
    a = A()
    t = a.T()
    print(a, t.a) # prints the same object twice
    print(isinstance(t, a.T), isinstance(t, A.T), isinstance(t, A().T)) # True True False
    

    That metaclass is a lot more complicated and subtle than code really should be if you want to be able to read and maintain it. It creates a subclass of T for each A instance you look the original class up on. That can be very confusing in some situations (like the last isinstance check in the example code)!

    Here are some of the subtleties: we need to be selective about which of __new__ and __init__ we override, because if we unconditionally defined both, we'd get errors whenever T doesn't define both of them as taking obj as a positional argument. And if we bound naively (the way methods are bound), we'd end up with a potentially unbounded number of classes, since every lookup would create a new subclass (a.T would be a different class each time). To avoid that, I cache the subclasses using setattr (bringing us back full circle!).
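
    Here is a quick check of that caching behaviour (a sketch continuing the example above; the names a2 and bound are just illustrative):

    a2 = A()
    bound = a2.T                 # first lookup creates the subclass and caches it
    print(a2.T is bound)         # True: later lookups find it in a2.__dict__
    print("T" in vars(a2))       # True: stored on the instance by setattr
    print(A().T is A().T)        # False: every A instance gets its own subclass
    print(A.T is A.T)            # True: class-level lookup returns the original T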

    I'd strongly recommend against using this kind of design for any serious code. This class architecture really stinks of trying to force Python into a design that would fit more naturally in some other programming language, where inner classes are a normal thing. It's almost certainly unnecessary to design the classes this way; there's likely a slightly modified design that is much more natural to Python's class model. Do yourself (and anyone who ever needs to read your code in the future) a huge favor and figure out what that better design is, rather than using a metaclass monstrosity that will be very tricky to understand or modify.