Tags: python-2.7, multiprocessing, subprocess, pickle, traceback

Can a script start a process that starts other child processes using Multiprocessing?


I have not been able to find an answer to this question, and I am wondering if it is the source of an error I don't recognize. I am running 64-bit Windows 7, and I am currently writing a game where the main process should be able to spawn multiple processes using the multiprocessing module. Each of those sub-processes then also spawns a single additional process, again via multiprocessing, that runs the graphics library.

When I attempt to run the script (both from IDLE and by running the file from command prompt), I get a traceback that reads:

Traceback (most recent call last):
  File "C:\Users\David\Desktop\Py\split\multiverse.py", line 141, in multiButtonPress
    self.universeList[0].start()
  File "C:\Python27\lib\multiprocessing\process.py", line 130, in start
    self._popen = Popen(self)
  File "C:\Python27\lib\multiprocessing\forking.py", line 277, in __init__
    dump(process_obj, to_child, HIGHEST_PROTOCOL)
  File "C:\Python27\lib\multiprocessing\forking.py", line 199, in dump
    ForkingPickler(file, protocol).dump(obj)
  File "C:\Python27\lib\pickle.py", line 224, in dump
    self.save(obj)
  File "C:\Python27\lib\pickle.py", line 331, in save
    self.save_reduce(obj=obj, *rv)
  File "C:\Python27\lib\pickle.py", line 419, in save_reduce
    save(state)
  File "C:\Python27\lib\pickle.py", line 286, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\Python27\lib\pickle.py", line 649, in save_dict
    self._batch_setitems(obj.iteritems())
  File "C:\Python27\lib\pickle.py", line 681, in _batch_setitems
    save(v)
  File "C:\Python27\lib\pickle.py", line 306, in save
    rv = reduce(self.proto)
  File "C:\Python27\lib\multiprocessing\managers.py", line 484, in __reduce__
    return type(self).from_address, \
AttributeError: type object 'SyncManager' has no attribute 'from_address'

SyncManager is a class found in the multiprocessing library. Is the fact that my subprocess contains an object that is an instance of Process messing with its picklability? If so, is there a way to remedy this without having to completely redesign the system?


Solution

  • The only limit you'll find on creating grandchild processes is if your child processes are started with daemon=True. As stated in the docs for multiprocessing.Process.daemon:

    Note that a daemonic process is not allowed to create child processes. Otherwise a daemonic process would leave its children orphaned if it gets terminated when its parent process exits. Additionally, these are not Unix daemons or services, they are normal processes that will be terminated (and not joined) if non-daemonic processes have exited.

    As long as you set proc.daemon = False prior to calling proc.start() (False is the default), you'll be able to create grandchild processes inside proc, as sketched below. The error you're seeing is a different issue: it comes from trying to pass a non-picklable object (a multiprocessing.SyncManager) to your child process. That is a separate problem, and if you need help with it, you should post a new question.
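Here is a minimal sketch of the parent → child → grandchild layout described above. The names (run_universe, run_graphics, the "universe-N" labels) are hypothetical stand-ins for the asker's code, not taken from the question. It also sidesteps the pickling error by creating the Manager inside the child process and passing only plain picklable data across process boundaries.

    import multiprocessing

    def run_graphics(label):
        # Grandchild process: stands in for the graphics-library process.
        print('graphics process running for %s' % label)

    def run_universe(label):
        # Child process: allowed to create its own children because it is
        # not daemonic. Any Manager it needs is created here rather than
        # being pickled and sent from the parent.
        manager = multiprocessing.Manager()
        shared_state = manager.dict()
        shared_state['label'] = label

        graphics = multiprocessing.Process(target=run_graphics, args=(label,))
        graphics.start()
        graphics.join()

    if __name__ == '__main__':
        universes = []
        for i in range(3):
            proc = multiprocessing.Process(target=run_universe,
                                           args=('universe-%d' % i,))
            # False is already the default; set explicitly to emphasize that
            # daemonic processes are not allowed to create children.
            proc.daemon = False
            proc.start()
            universes.append(proc)
        for proc in universes:
            proc.join()

Note the if __name__ == '__main__' guard: on Windows, multiprocessing re-imports the main module in each child, so process creation must be protected by it.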