I'm writing test cases for some Python code that uses asyncio. I'm noticing a significant performance degradation when my test classes inherit from unittest.IsolatedAsyncioTestCase versus using a plain unittest.TestCase and manually driving the coroutine with asyncio.run.
I've tested this on Python 3.13.1 and 3.10.10, on my personal desktop (Mac Studio M1 Max) and on a Linode Nanode VPS, and I'm seeing roughly 10x slowdowns on both platforms.
Below is a simplified example; my actual use case is far more complicated.
# test.py
import asyncio
from unittest import TestCase
from unittest import IsolatedAsyncioTestCase


async def f():
    for i in range(100000):
        await asyncio.sleep(0)


class TestManualAsync(TestCase):
    def test_async(self):
        asyncio.run(f())


class TestIsolatedAsync(IsolatedAsyncioTestCase):
    async def test_async(self):
        await f()
$ python3 -m unittest test.py -k TestManualAsync
.
----------------------------------------------------------------------
Ran 1 test in 0.614s
$ python3 -m unittest test.py -k TestIsolatedAsync
.
----------------------------------------------------------------------
Ran 1 test in 5.115s
As you can see from the above, it's much faster to just run the coroutine from a traditional test class. I don't see why this should be the case.
It's because IsolatedAsyncioTestCase runs the event loop in debug mode:

runner = asyncio.Runner(debug=True, loop_factory=self.loop_factory)

which adds a bunch of overhead on every await. Try using asyncio.run(f(), debug=True) in TestManualAsync, and it'll take about as much time as TestIsolatedAsync does.
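To see this for yourself, here's a minimal sketch of that comparison (the class name TestManualAsyncDebug is just for illustration) — the same coroutine, but with debug mode explicitly enabled in the manual runner, which should land close to the IsolatedAsyncioTestCase timing:

```python
# Hypothetical test class demonstrating that debug mode, not the
# IsolatedAsyncioTestCase machinery itself, accounts for the slowdown.
import asyncio
from unittest import TestCase


async def f():
    # Same busy coroutine as in the question: 100k trips through the loop.
    for _ in range(100000):
        await asyncio.sleep(0)


class TestManualAsyncDebug(TestCase):
    def test_async(self):
        # debug=True turns on the event loop's debug checks (coroutine
        # origin tracking, slow-callback logging), which add per-await
        # overhead just like IsolatedAsyncioTestCase's runner does.
        asyncio.run(f(), debug=True)
```

Running this with `python3 -m unittest -k TestManualAsyncDebug` should take roughly as long as the TestIsolatedAsync case, while the original TestManualAsync stays fast.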