
Python Multiprocessing speed with single process


I've found some behavior with Python multiprocessing that I'm having difficulty understanding. When I use Pool, even with a single worker process, it runs much, much faster than the same work done sequentially.

Why is that? Does multiprocessing somehow optimize the code?

import time
from multiprocessing import Pool


fib_input = [24] * 10

def fib(n):
    # naive exponential-time Fibonacci, used here purely as a CPU-bound workload
    if n in (0, 1):
        return 1
    return fib(n - 1) + fib(n - 2)


def main1():
    with Pool(processes=1) as p:
        results = p.map(fib, fib_input)
    print(results)


def main2():
    results = list(map(fib, fib_input))
    print(results)


if __name__ == "__main__":
    start_time = time.time()
    main1()
    print("--- %s seconds ---" % (time.time() - start_time))

    start_time = time.time()
    main2()
    print("--- %s seconds ---" % (time.time() - start_time))

Output:

[75025, 75025, 75025, 75025, 75025, 75025, 75025, 75025, 75025, 75025]
--- 0.47702741622924805 seconds ---
[75025, 75025, 75025, 75025, 75025, 75025, 75025, 75025, 75025, 75025]
--- 7.922452926635742 seconds ---
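As an aside, for interval measurements like these, `time.perf_counter()` is preferable to `time.time()`, since it uses the highest-resolution monotonic clock available and is not affected by system clock adjustments. A minimal sketch of the same comparison (with smaller, hypothetical inputs so it finishes quickly):

```python
import time
from multiprocessing import Pool


def fib(n):
    # naive exponential-time Fibonacci, used as a CPU-bound workload
    if n in (0, 1):
        return 1
    return fib(n - 1) + fib(n - 2)


if __name__ == "__main__":
    inputs = [20] * 4  # smaller inputs than the original, to keep the demo fast

    start = time.perf_counter()
    with Pool(processes=1) as p:
        pooled = p.map(fib, inputs)
    print("pool: %.4f seconds" % (time.perf_counter() - start))

    start = time.perf_counter()
    plain = list(map(fib, inputs))
    print("plain: %.4f seconds" % (time.perf_counter() - start))

    # both approaches must agree on the results
    assert pooled == plain
```

Run outside a debugger, the two timings should be in the same ballpark; the one-process pool can even be slightly slower because of process startup and pickling overhead.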

Solution

  • Ok, thanks to the comments I figured out my mistake. Thanks guys!

    A rookie mistake.

    I was running the script from Visual Studio with PTVS, and that is where the slowdown came from. I had changed the build configuration dropdown to Release, but pressing F5 still launched the script under the debugger, while I was convinced it was a clean run.

    Running the script from cmd outside Visual Studio did the trick. I later found that Ctrl+F5 also starts without debugging.

    Thanks for the help.
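A quick way to check for this situation from inside the script: Python debuggers typically install a trace function via `sys.settrace`, so `sys.gettrace()` returning a non-`None` value is a strong hint the code is running under a debugger (and therefore with tracing overhead). Note this is a heuristic, not a guarantee; some tools trace without being debuggers, and some debuggers use other mechanisms. Worker processes spawned by `Pool` are generally not traced, which is why `main1` was unaffected. A minimal sketch:

```python
import sys


def debugger_attached():
    # sys.gettrace() returns the currently active trace function, if any.
    # Debuggers such as pdb or PTVS install one, so a non-None result
    # suggests the interpreter is paying per-line tracing overhead.
    return sys.gettrace() is not None


if __name__ == "__main__":
    if debugger_attached():
        print("Warning: running under a debugger; timings will be inflated.")
    else:
        print("No trace function active; timings should be representative.")
```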