[The forum software is claiming that multiple people have clicked to flag this post as spam, so it was hidden until I edit it. Not sure what’s going on with that?]
This is more of a high-level design discussion, so I guess I’ll try putting it here instead of github issues and see what happens!
I’ve recently been trying to understand the exact semantics of curio’s primitive operations around rescheduling/blocking/cancellation. Like, when I do `await curio.whatever()`, then:
- Is it possible for the current task to block?
- Is it possible for the current task to get scheduled away, and allow some other task to run in the mean time? (This is not quite the same as blocking.)
- Is it possible for the operation to raise a `CancelledError`?
- If it does raise `CancelledError`, then what can I conclude about the success/failure of the original operation?
- Is there some systematic rule for answering these questions? Can we document it?
In general these questions apply to all of curio’s public API, but as a starting point I’ve just been looking at the low-level traps. It turns out that currently, traps fall roughly into three categories:
- “Synchronous traps”: always return immediately, never schedule away, are never a cancellation point (example: `_trap_current_task`). I just submitted a patch to handle these in a more uniform manner.
- “Blocking traps”: always block; allow cancellation; cancellation means the trap didn’t happen.
- “Rescheduling traps”: never block, but do trigger a “reschedule”, so other tasks can/will run before the trap returns (example: `_trap_spawn`). It turns out that these are not cancellation points, though the logic is somewhat confusing.
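To make the three categories concrete, here’s a toy model of a run loop — my own sketch, not curio’s actual kernel code — where tasks are plain generators and traps are just strings:

```python
# Toy model of the three trap categories -- my own sketch, not curio's
# actual kernel code. Tasks are plain generators; traps are strings.
from collections import deque

def mini_run(tasks):
    ready = deque(tasks)   # runnable tasks, FIFO
    parked = []            # tasks sitting in a "blocking trap"
    while ready or parked:
        if not ready:
            # nothing runnable: a real kernel would poll for I/O here;
            # we just wake everything that was blocked
            ready.extend(parked)
            parked.clear()
        task = ready.popleft()
        try:
            trap = task.send(None)
        except StopIteration:
            continue
        if trap == "sync":
            # synchronous trap: resume the same task immediately,
            # no other task gets a chance to run
            ready.appendleft(task)
        elif trap == "resched":
            # rescheduling trap: never blocks, but goes to the back of
            # the ready queue, so other runnable tasks run first
            ready.append(task)
        elif trap == "block":
            # blocking trap: leaves the ready queue until "woken"
            parked.append(task)

def parent(log):
    log.append("parent-1")
    yield "resched"        # a rescheduling trap, like _trap_spawn today
    log.append("parent-2")

def child(log):
    log.append("child")
    yield "sync"           # a synchronous trap, like _trap_current_task

log = []
mini_run([parent(log), child(log)])
```

Because `parent` hits a rescheduling trap, the log comes out `parent-1, child, parent-2`: the child gets to run before the parent resumes. If `parent` instead yielded the synchronous trap, the log would be `parent-1, parent-2, child`.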
There are also some weirdos. The `_trap_sleep` trap usually blocks, but might reschedule instead (if you just did `sleep(0)`).
`_trap_cancel` is also weird. If we want to go cancel a task, then how we handle that depends on what it was doing when it stopped running. (We know that it isn’t currently running, because the task that’s calling `_trap_cancel` is running.) Synchronous traps aren’t an issue here, because by definition the victim didn’t stop running inside a synchronous trap. And blocking traps are OK – they all have the machinery to cancel the blocking and raise the `CancelledError`. But there are two other cases too: (1) the victim is in the middle of a blocking trap, but the operation already succeeded, and it just hasn’t been scheduled again yet. In this case we missed our chance to cancel the operation, so we have to wait for it to block again. (2) It’s in a rescheduling trap, which looks identical to situation (1). That’s why rescheduling traps aren’t cancellation points. Either way, we can’t just inject a `CancelledError` immediately. So what happens right now is that if the victim isn’t cancellable, then the task that’s trying to do the cancellation keeps rescheduling itself and retrying the cancellation, until eventually the victim task does block on something, at which point we kill it.
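The retry dance can be sketched like this (names and states are my own, purely illustrative):

```python
# Illustrative sketch of the current cancellation retry loop; the
# names and states here are my own, not curio's internals.
def try_cancel(victim_states, log):
    """victim_states: what the victim looks like each time the
    cancelling task gets rescheduled and re-checks it."""
    for state in victim_states:
        if state == "blocked":
            # blocking traps have the machinery to unwind themselves
            # and raise CancelledError, so now we can deliver it
            log.append("deliver CancelledError")
            return True
        # "ready" covers both indistinguishable cases: a blocking trap
        # that already succeeded but hasn't resumed yet, and a
        # rescheduling trap -- either way, injecting CancelledError
        # now would be wrong, so reschedule ourselves and retry
        log.append("retry")
    return False

log = []
try_cancel(["ready", "ready", "blocked"], log)
```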
I’d like to propose the following API design principle: every trap should be either “synchronous” or “blocking”, with the semantics described above. And, traps should have the “synchronous” semantics whenever possible.
(Rationale: synchronous traps are much easier to reason about; yield points and cancellation points are both brain-melting, especially when you have too many of them. So better to leave them out by default when possible, and then let users add them back in if necessary.)
If this is accepted, then here are the traps that would need to change (they don’t currently fit into either category), and some discussion of the trade-offs:
`_trap_sleep`, b/c of the `sleep(0)` issue: This is actually a nasty bug, because inserting a task that just does `sleep(0)` in a loop should allow other tasks to run, but it doesn’t quite: since the looping task immediately gets scheduled onto the `ready` queue, we can never finish this iteration through the run loop; so other tasks that are already runnable do get to run, but we never poll for I/O to let tasks become runnable. So we really should just make `sleep(0)` “blocking” in the above sense. Fixing this is easy and I have a patch for it.
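Here’s a toy run loop — again my own sketch of the structure described above, not curio’s code — that shows the starvation. With the buggy semantics the ready queue never drains, so the I/O poll is only reached when we artificially cap the number of steps; with the “blocking” fix, every `sleep(0)` lets the kernel get to the poll:

```python
# Toy run loop showing the sleep(0) starvation -- my own sketch, not
# curio's actual kernel.
from collections import deque

def run_loop(task, mode, want_polls=3, max_steps=20):
    ready = deque([task])
    wake_after_poll = []        # tasks parked until the next I/O poll
    io_polls = 0
    steps = 0
    while ready and steps < max_steps:
        # inner loop: drain everything currently runnable
        while ready and steps < max_steps:
            steps += 1
            t = ready.popleft()
            t.send(None)        # task yields its sleep(0) trap
            if mode == "buggy":
                # current behaviour: straight back onto the queue
                # we're draining, so this inner loop never empties
                ready.append(t)
            else:
                # proposed fix: sleep(0) is "blocking" -- park the
                # task until after the next I/O poll
                wake_after_poll.append(t)
        io_polls += 1           # only reached once `ready` drains
        ready.extend(wake_after_poll)
        wake_after_poll.clear()
        if io_polls >= want_polls:
            break
    return io_polls

def spinner():
    while True:
        yield "sleep(0)"
```

In buggy mode, `run_loop(spinner(), "buggy")` burns all 20 steps inside the inner drain and polls only once; in fixed mode it polls after every step.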
`_trap_join`: This is similar to `_trap_sleep` – it’s usually blocking, but if the task has already terminated then it’s rescheduling. The easy fix would be: if the task has already terminated, then just return synchronously instead.
`_trap_reschedule_tasks`: This is what operations like `Queue.put` use to resume any tasks that are blocked in `Queue.get`. Right now, these operations have slightly quirky (but deterministic) semantics: the task that does the waking always reschedules itself to run after the tasks that were awoken. There are some tests that fail if this is changed. Still, I think that the benefits of synchronicity are worth changing this. I can’t really think of any particular advantages to the current system.
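The difference between the two semantics is just where the waker ends up relative to the ready queue (a toy model with assumed names, not curio’s implementation):

```python
# Toy model of the two possible Queue.put semantics -- assumed names,
# not curio's implementation.
from collections import deque

def wake(ready, waker, sleepers, synchronous):
    """`waker` just called Queue.put; `sleepers` were blocked in Queue.get."""
    ready.extend(sleepers)
    if synchronous:
        # proposed: the trap returns synchronously, so the waker keeps
        # running; the woken tasks merely become runnable behind it
        ready.appendleft(waker)
    else:
        # current: the waker reschedules itself *behind* the tasks it
        # just woke, so they are guaranteed to run first
        ready.append(waker)
    return list(ready)

current = wake(deque(), "putter", ["getter1", "getter2"], synchronous=False)
proposed = wake(deque(), "putter", ["getter1", "getter2"], synchronous=True)
```

Here `current` comes out `["getter1", "getter2", "putter"]` and `proposed` comes out `["putter", "getter1", "getter2"]` – the proposed version no longer bakes the FIFO hand-off into the operation itself.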
`_trap_spawn`: The question of whether child-runs-first or parent-runs-first is an interesting one in pre-emptive kernels, but I can’t see how any of the arguments really carry over, and parent-runs-first (i.e., making `_trap_spawn` return synchronously) is easier to reason about.
`_trap_cancel`: For the reasons described above. I think if we standardize how blocking traps work, then that should make it relatively easy to rework the cancellation logic so that, instead of the canceling task having to hang around and repeatedly retry cancels, we can enqueue a cancel to a task, and then blocking traps check for that and immediately raise an exception if it’s found.
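The rework might look something like this – a sketch under my own assumed names, not an actual curio patch:

```python
# Sketch of the proposed "enqueue a cancel" design -- assumed names,
# not an actual curio patch.
class CancelledError(Exception):
    pass

class Task:
    def __init__(self):
        self.cancel_pending = False

def cancel(task):
    # cheap and non-blocking for the canceller: just record the
    # request instead of rescheduling-and-retrying in a loop
    task.cancel_pending = True

def blocking_trap(task, operation):
    # every blocking trap checks for a queued cancel before (and, in a
    # real kernel, while) blocking, and raises immediately if found
    if task.cancel_pending:
        task.cancel_pending = False
        raise CancelledError
    return operation()
```

The canceller returns immediately after `cancel()`; the `CancelledError` is delivered the next time the victim enters (or would resume from) a blocking trap, which matches the “cancellation means the trap didn’t happen” semantics above.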
Making `_trap_reschedule_tasks` and `_trap_spawn` synchronous in particular would also make it easier to experiment with alternative, non-FIFO scheduling policies, because currently those two traps directly expose the FIFO-ness of the scheduler as part of their public API.