Discussion forum for David Beazley

That's a nice event loop, it'd be a shame


With async being all the rage these days, I’m a bit surprised that there hasn’t been more of a rumble over the fact that the GIL is still broken. No, I’m not talking about its removal entirely. I’m referring to the fact that launching a computationally intensive thread is an easy way to kill the responsiveness of an event loop almost entirely. Let’s say you’ve got a little asyncio code like this:

import asyncio

async def echo_client(reader, writer):
    addr = writer.get_extra_info('peername')
    print('Connection from', addr)
    while True:
        data = await reader.read(100000)
        if not data:
            break
        writer.write(data)
        await writer.drain()
    print('Connection closed')

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    coro = asyncio.start_server(echo_client, '', 25000, loop=loop)
    loop.run_until_complete(coro)
    loop.run_forever()

Running this against a little benchmark, I see it handling about 16,000 requests/second. The exact details don’t matter, but let’s kill the performance. Just modify the code to launch a CPU-intensive thread:

def fib(n):
    if n < 2:
        return 1
    return fib(n-1) + fib(n-2)

if __name__ == '__main__':
    import concurrent.futures
    executor = concurrent.futures.ThreadPoolExecutor()

    loop = asyncio.get_event_loop()
    coro = asyncio.start_server(echo_client, '', 25000, loop=loop)
    loop.run_in_executor(executor, fib, 50)
    loop.run_until_complete(coro)
    loop.run_forever()

Now the performance drops down to about 135 requests/second. Excellent. For those of you keeping score, that’s only a drop by a factor of about 120 (Just as a note, my Curio project is negatively affected in exactly the same manner).
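(The benchmark itself isn’t shown in the post; a minimal client along these lines would produce numbers of this flavor. The function name, port, and message size here are hypothetical, not the actual harness used.)

```python
# Hypothetical benchmark client: hammer the echo server with small
# requests for a fixed duration and count completed round trips.
import socket
import time

def benchmark(host='localhost', port=25000, msg=b'x', duration=1.0):
    sock = socket.create_connection((host, port))
    count = 0
    deadline = time.monotonic() + duration
    while time.monotonic() < deadline:
        sock.sendall(msg)
        if not sock.recv(100):    # server closed the connection
            break
        count += 1
    sock.close()
    return count / duration       # requests per second

if __name__ == '__main__':
    print(benchmark(), 'requests/second')
```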

So why does it happen? Remember my GIL Talk from 2010 and the fact that the GIL was “fixed” in Python 3.2? Well, it wasn’t fixed. It was merely changed into something else. More details about what’s happening can be found in my RuPy’2011 talk.

I sometimes wonder if it’s time to revisit this issue and the idea of thread priorities ;-).


Moral of this story: be super careful combining threads and event loops. Stick to I/O. Use separate processes for CPU-intensive operations.
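Concretely, that means swapping the ThreadPoolExecutor in the broken example above for a ProcessPoolExecutor. A minimal sketch (using the newer `asyncio.run()` API rather than the explicit loop setup above):

```python
import asyncio
import concurrent.futures

def fib(n):
    if n < 2:
        return 1
    return fib(n-1) + fib(n-2)

async def main():
    loop = asyncio.get_running_loop()
    # A process pool runs fib() in a separate interpreter with its own
    # GIL, so the event loop's thread is never starved.
    with concurrent.futures.ProcessPoolExecutor() as pool:
        return await loop.run_in_executor(pool, fib, 30)

if __name__ == '__main__':
    print(asyncio.run(main()))   # 1346269
```

The await suspends only the one coroutine waiting on the result; the rest of the loop keeps serving connections at full speed.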


We need fib() in the standard lib.

Ahem. Yeah, this is such a landmine.

  • If you have work to do, throw it in a pool.
  • Can I do it in asyncio? Oh, I see, run_in_executor…

Maybe we need a table somewhere that goes like this

              | main thread    | thread pool     | process pool
IO intensive  | no way         | maybe           | seems ok
CPU intensive | doing it wrong | nice try but no | what choice do you have


I agree; I have never understood why it’s OK for Python to still have this problem. I have seen all kinds of approaches (PyParallel, STM, Gilectomy, Stackless), but none of them have gotten any traction (as in becoming the standard CPython).

That being said, Larry Hastings’ Gilectomy (https://github.com/larryhastings/gilectomy) seems like a good approach, but it still needs a lot of work.

It would be great if async could cross threads.
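(It can, to a degree: `asyncio.run_coroutine_threadsafe()` is the supported bridge for submitting a coroutine to a loop owned by another thread. A small sketch, with hypothetical names:)

```python
import asyncio
import threading

async def echo(msg):
    await asyncio.sleep(0.01)      # runs on the event loop's thread
    return msg

def submit_from_thread(loop, msg, results):
    # Hand a coroutine to a loop running in another thread; this
    # returns a concurrent.futures.Future we can wait on here.
    fut = asyncio.run_coroutine_threadsafe(echo(msg), loop)
    results.append(fut.result())   # blocks this thread, not the loop

async def main():
    loop = asyncio.get_running_loop()
    results = []
    t = threading.Thread(target=submit_from_thread,
                         args=(loop, 'hello', results))
    t.start()
    while t.is_alive():            # keep the loop turning without blocking it
        await asyncio.sleep(0.01)
    t.join()
    return results

if __name__ == '__main__':
    print(asyncio.run(main()))     # ['hello']
```

It doesn’t help with the GIL problem above, but it does let threads and a loop cooperate safely.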



I think this has something to do with Python being slow for CPU-intensive applications. By the time you’d reach for a thread or process pool for compute-heavy stuff, you’d probably run into Python being slow for those workloads anyway, and then switch to something like Go (or a C++ module).

The thing is, Python simply is not performant enough for CPU-intensive stuff (as shown here).


Despite the presence of the GIL, there’s still a part of me that thinks something could be gained by tweaking it a bit. Even the ability to run the async event-loop thread at a higher priority might improve things like response time (the weird benchmark here). If I get a chance, I should revisit this and try some experiments.
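For what it’s worth, CPython already exposes one crude knob in this direction: `sys.setswitchinterval()`. There are no priorities, but shrinking the interval forces a CPU-bound thread to offer up the GIL more often, which can reduce worst-case wakeup latency for an I/O thread at some throughput cost. A rough experiment sketch (the measurement approach here is my own, not from the post):

```python
import sys
import threading
import time

def fib(n):
    if n < 2:
        return 1
    return fib(n-1) + fib(n-2)

def worst_wakeup_latency(samples=50):
    # How late does a 1 ms sleep wake up while another thread hogs the GIL?
    worst = 0.0
    for _ in range(samples):
        start = time.monotonic()
        time.sleep(0.001)
        worst = max(worst, time.monotonic() - start - 0.001)
    return worst

if __name__ == '__main__':
    # Long-running CPU hog, standing in for the event loop's competitor.
    threading.Thread(target=fib, args=(34,), daemon=True).start()
    sys.setswitchinterval(0.005)    # the default: 5 ms
    slow = worst_wakeup_latency()
    sys.setswitchinterval(0.0001)   # hand off the GIL far more often
    fast = worst_wakeup_latency()
    print(f'default interval: {slow*1000:.2f} ms worst case')
    print(f'tiny interval:    {fast*1000:.2f} ms worst case')
```

The flip side is that more frequent handoffs slow down the CPU-bound thread, so this is a trade, not a fix.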


Well, you have been deeper down that rabbit hole than most of us, so that’s something you know more about. If you need help in some way, I would be happy to assist, even though it has been a while since I was a C hacker :slight_smile: